00:00:00.000 Started by upstream project "autotest-spdk-v24.05-vs-dpdk-v22.11" build number 114
00:00:00.000 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3292
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.130 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.131 The recommended git tool is: git
00:00:00.131 using credential 00000000-0000-0000-0000-000000000002
00:00:00.134 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.186 Fetching changes from the remote Git repository
00:00:00.188 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.238 Using shallow fetch with depth 1
00:00:00.238 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.238 > git --version # timeout=10
00:00:00.275 > git --version # 'git version 2.39.2'
00:00:00.275 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.304 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.304 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.714 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.727 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.740 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD)
00:00:05.741 > git config core.sparsecheckout # timeout=10
00:00:05.752 > git read-tree -mu HEAD # timeout=10
00:00:05.769 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5
00:00:05.797 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters"
00:00:05.797 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10
00:00:05.891 [Pipeline] Start of Pipeline
00:00:05.904 [Pipeline] library
00:00:05.905 Loading library shm_lib@master
00:00:05.906 Library shm_lib@master is cached. Copying from home.
00:00:05.921 [Pipeline] node
00:00:05.935 Running on GP14 in /var/jenkins/workspace/nvmf-phy-autotest
00:00:05.937 [Pipeline] {
00:00:05.949 [Pipeline] catchError
00:00:05.950 [Pipeline] {
00:00:05.965 [Pipeline] wrap
00:00:05.976 [Pipeline] {
00:00:05.989 [Pipeline] stage
00:00:05.991 [Pipeline] { (Prologue)
00:00:06.201 [Pipeline] sh
00:00:06.497 + logger -p user.info -t JENKINS-CI
00:00:06.515 [Pipeline] echo
00:00:06.517 Node: GP14
00:00:06.523 [Pipeline] sh
00:00:06.819 [Pipeline] setCustomBuildProperty
00:00:06.830 [Pipeline] echo
00:00:06.831 Cleanup processes
00:00:06.837 [Pipeline] sh
00:00:07.117 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:07.117 4051300 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:07.131 [Pipeline] sh
00:00:07.413 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:07.413 ++ grep -v 'sudo pgrep'
00:00:07.413 ++ awk '{print $1}'
00:00:07.413 + sudo kill -9
00:00:07.413 + true
00:00:07.428 [Pipeline] cleanWs
00:00:07.439 [WS-CLEANUP] Deleting project workspace...
00:00:07.439 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.445 [WS-CLEANUP] done
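The "Cleanup processes" step above reduces to a pgrep/kill pipeline. A minimal standalone sketch of the same idea (not the pipeline's verbatim script):

  pids=$(sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk | grep -v 'sudo pgrep' | awk '{print $1}')
  [ -n "$pids" ] && sudo kill -9 $pids || true    # tolerate the no-stale-process case

In the run above, pgrep matched only its own sudo wrapper, so the grep/awk filter produced nothing and the failing bare `kill -9` was swallowed by `+ true`.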
00:00:07.450 [Pipeline] setCustomBuildProperty
00:00:07.467 [Pipeline] sh
00:00:07.749 + sudo git config --global --replace-all safe.directory '*'
00:00:07.827 [Pipeline] httpRequest
00:00:07.848 [Pipeline] echo
00:00:07.850 Sorcerer 10.211.164.101 is alive
00:00:07.860 [Pipeline] httpRequest
00:00:07.864 HttpMethod: GET
00:00:07.864 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:07.865 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:07.874 Response Code: HTTP/1.1 200 OK
00:00:07.874 Success: Status code 200 is in the accepted range: 200,404
00:00:07.875 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:09.932 [Pipeline] sh
00:00:10.215 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:10.233 [Pipeline] httpRequest
00:00:10.253 [Pipeline] echo
00:00:10.255 Sorcerer 10.211.164.101 is alive
00:00:10.264 [Pipeline] httpRequest
00:00:10.269 HttpMethod: GET
00:00:10.270 URL: http://10.211.164.101/packages/spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz
00:00:10.271 Sending request to url: http://10.211.164.101/packages/spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz
00:00:10.281 Response Code: HTTP/1.1 200 OK
00:00:10.282 Success: Status code 200 is in the accepted range: 200,404
00:00:10.282 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz
00:00:32.205 [Pipeline] sh
00:00:32.487 + tar --no-same-owner -xf spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz
00:00:35.024 [Pipeline] sh
00:00:35.303 + git -C spdk log --oneline -n5
00:00:35.303 241d0f3c9 test: fix dpdk builds on ubuntu24
00:00:35.303 327de4622 test/bdev: Skip "hidden" nvme devices from the sysfs
00:00:35.303 5fa2f5086 nvme: add lock_depth for ctrlr_lock
00:00:35.303 330a4f94d nvme: check pthread_mutex_destroy() return value
00:00:35.303 7b72c3ced nvme: add nvme_ctrlr_lock
00:00:35.322 [Pipeline] withCredentials
00:00:35.332 > git --version # timeout=10
00:00:35.345 > git --version # 'git version 2.39.2'
00:00:35.369 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:35.371 [Pipeline] {
00:00:35.380 [Pipeline] retry
00:00:35.383 [Pipeline] {
00:00:35.403 [Pipeline] sh
00:00:35.844 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:00:37.788 [Pipeline] }
00:00:37.813 [Pipeline] // retry
00:00:37.819 [Pipeline] }
00:00:37.841 [Pipeline] // withCredentials
00:00:37.850 [Pipeline] httpRequest
00:00:37.867 [Pipeline] echo
00:00:37.869 Sorcerer 10.211.164.101 is alive
00:00:37.879 [Pipeline] httpRequest
00:00:37.883 HttpMethod: GET
00:00:37.884 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:37.885 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:37.894 Response Code: HTTP/1.1 200 OK
00:00:37.895 Success: Status code 200 is in the accepted range: 200,404
00:00:37.895 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:41.418 [Pipeline] sh
00:00:41.698 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:43.091 [Pipeline] sh
00:00:43.372 + git -C dpdk log --oneline -n5
00:00:43.372 caf0f5d395 version: 22.11.4
00:00:43.372 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:00:43.372 dc9c799c7d vhost: fix missing spinlock unlock
00:00:43.372 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:00:43.372 6ef77f2a5e net/gve: fix RX buffer size alignment
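Each source tree (jbp, spdk, dpdk) arrives the same way: a GET against the local package mirror ("Sorcerer", 10.211.164.101) followed by an untar. Outside Jenkins' httpRequest step the same fetch could look like this sketch, using the SPDK tarball named above:

  curl -fO http://10.211.164.101/packages/spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz
  tar --no-same-owner -xf spdk_241d0f3c94f275e2bee7a7c76d26b4d9fc729108.tar.gz   # keep extracted files owned by the CI user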
interrupt" 00:00:43.372 dc9c799c7d vhost: fix missing spinlock unlock 00:00:43.372 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:00:43.372 6ef77f2a5e net/gve: fix RX buffer size alignment 00:00:43.382 [Pipeline] } 00:00:43.399 [Pipeline] // stage 00:00:43.408 [Pipeline] stage 00:00:43.410 [Pipeline] { (Prepare) 00:00:43.431 [Pipeline] writeFile 00:00:43.448 [Pipeline] sh 00:00:43.730 + logger -p user.info -t JENKINS-CI 00:00:43.743 [Pipeline] sh 00:00:44.027 + logger -p user.info -t JENKINS-CI 00:00:44.041 [Pipeline] sh 00:00:44.321 + cat autorun-spdk.conf 00:00:44.322 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:44.322 SPDK_TEST_NVMF=1 00:00:44.322 SPDK_TEST_NVME_CLI=1 00:00:44.322 SPDK_TEST_NVMF_NICS=mlx5 00:00:44.322 SPDK_RUN_UBSAN=1 00:00:44.322 NET_TYPE=phy 00:00:44.322 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:00:44.322 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:00:44.329 RUN_NIGHTLY=1 00:00:44.332 [Pipeline] readFile 00:00:44.353 [Pipeline] withEnv 00:00:44.355 [Pipeline] { 00:00:44.369 [Pipeline] sh 00:00:44.651 + set -ex 00:00:44.651 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:00:44.651 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:00:44.651 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:44.651 ++ SPDK_TEST_NVMF=1 00:00:44.651 ++ SPDK_TEST_NVME_CLI=1 00:00:44.651 ++ SPDK_TEST_NVMF_NICS=mlx5 00:00:44.651 ++ SPDK_RUN_UBSAN=1 00:00:44.651 ++ NET_TYPE=phy 00:00:44.651 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:00:44.651 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:00:44.651 ++ RUN_NIGHTLY=1 00:00:44.651 + case $SPDK_TEST_NVMF_NICS in 00:00:44.651 + DRIVERS=mlx5_ib 00:00:44.651 + [[ -n mlx5_ib ]] 00:00:44.651 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:44.651 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:52.768 rmmod: ERROR: Module irdma is not currently loaded 00:00:52.768 rmmod: ERROR: Module i40iw is not currently loaded 00:00:52.768 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:52.768 + true 00:00:52.768 + for D in $DRIVERS 00:00:52.768 + sudo modprobe mlx5_ib 00:00:52.768 + exit 0 00:00:52.777 [Pipeline] } 00:00:52.796 [Pipeline] // withEnv 00:00:52.801 [Pipeline] } 00:00:52.818 [Pipeline] // stage 00:00:52.828 [Pipeline] catchError 00:00:52.830 [Pipeline] { 00:00:52.873 [Pipeline] timeout 00:00:52.873 Timeout set to expire in 1 hr 0 min 00:00:52.876 [Pipeline] { 00:00:52.895 [Pipeline] stage 00:00:52.897 [Pipeline] { (Tests) 00:00:52.914 [Pipeline] sh 00:00:53.193 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:00:53.193 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:00:53.193 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:00:53.193 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:00:53.193 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:53.193 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:00:53.193 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:00:53.193 + [[ ! 
00:00:52.777 [Pipeline] }
00:00:52.796 [Pipeline] // withEnv
00:00:52.801 [Pipeline] }
00:00:52.818 [Pipeline] // stage
00:00:52.828 [Pipeline] catchError
00:00:52.830 [Pipeline] {
00:00:52.873 [Pipeline] timeout
00:00:52.873 Timeout set to expire in 1 hr 0 min
00:00:52.876 [Pipeline] {
00:00:52.895 [Pipeline] stage
00:00:52.897 [Pipeline] { (Tests)
00:00:52.914 [Pipeline] sh
00:00:53.193 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest
00:00:53.193 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest
00:00:53.193 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest
00:00:53.193 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]]
00:00:53.193 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:53.193 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output
00:00:53.193 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]]
00:00:53.193 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:00:53.193 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output
00:00:53.193 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:00:53.193 + [[ nvmf-phy-autotest == pkgdep-* ]]
00:00:53.193 + cd /var/jenkins/workspace/nvmf-phy-autotest
00:00:53.193 + source /etc/os-release
00:00:53.193 ++ NAME='Fedora Linux'
00:00:53.193 ++ VERSION='38 (Cloud Edition)'
00:00:53.193 ++ ID=fedora
00:00:53.193 ++ VERSION_ID=38
00:00:53.193 ++ VERSION_CODENAME=
00:00:53.193 ++ PLATFORM_ID=platform:f38
00:00:53.193 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:53.193 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:53.193 ++ LOGO=fedora-logo-icon
00:00:53.193 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:53.193 ++ HOME_URL=https://fedoraproject.org/
00:00:53.193 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:53.193 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:53.193 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:53.193 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:53.193 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:53.193 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:53.193 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:53.193 ++ SUPPORT_END=2024-05-14
00:00:53.193 ++ VARIANT='Cloud Edition'
00:00:53.193 ++ VARIANT_ID=cloud
00:00:53.193 + uname -a
00:00:53.193 Linux spdk-gp-14 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:53.193 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:00:54.568 Hugepages
00:00:54.568 node hugesize free / total
00:00:54.568 node0 1048576kB 0 / 0
00:00:54.568 node0 2048kB 0 / 0
00:00:54.568 node1 1048576kB 0 / 0
00:00:54.568 node1 2048kB 0 / 0
00:00:54.568
00:00:54.568 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:54.568 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:00:54.568 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:00:54.568 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:00:54.568 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:00:54.568 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:00:54.568 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:00:54.568 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:00:54.568 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:00:54.568 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:00:54.568 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:00:54.568 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:00:54.568 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:00:54.568 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:00:54.568 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:00:54.568 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:00:54.568 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:00:54.568 NVMe 0000:84:00.0 8086 0a54 1 nvme nvme0 nvme0n1
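The hugepage half of the `setup.sh status` report (all zeros here, i.e. nothing reserved yet) comes straight from per-NUMA-node sysfs counters and can be reproduced with a loop like this sketch:

  for d in /sys/devices/system/node/node*/hugepages/hugepages-*; do
      node=$(basename "$(dirname "$(dirname "$d")")")   # e.g. node0
      size=${d##*hugepages-}                            # e.g. 2048kB
      echo "$node $size $(cat "$d/free_hugepages") free / $(cat "$d/nr_hugepages") total"
  done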
00:00:54.568 + rm -f /tmp/spdk-ld-path
00:00:54.568 + source autorun-spdk.conf
00:00:54.568 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:54.568 ++ SPDK_TEST_NVMF=1
00:00:54.568 ++ SPDK_TEST_NVME_CLI=1
00:00:54.568 ++ SPDK_TEST_NVMF_NICS=mlx5
00:00:54.568 ++ SPDK_RUN_UBSAN=1
00:00:54.568 ++ NET_TYPE=phy
00:00:54.568 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:00:54.568 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
00:00:54.568 ++ RUN_NIGHTLY=1
00:00:54.568 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:54.568 + [[ -n '' ]]
00:00:54.568 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:54.568 + for M in /var/spdk/build-*-manifest.txt
00:00:54.568 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:54.568 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:00:54.568 + for M in /var/spdk/build-*-manifest.txt
00:00:54.568 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:54.568 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:00:54.568 ++ uname
00:00:54.568 + [[ Linux == \L\i\n\u\x ]]
00:00:54.568 + sudo dmesg -T
00:00:54.568 + sudo dmesg --clear
00:00:54.568 + dmesg_pid=4052120
00:00:54.568 + [[ Fedora Linux == FreeBSD ]]
00:00:54.568 + sudo dmesg -Tw
00:00:54.568 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:54.568 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:54.568 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:54.568 + [[ -x /usr/src/fio-static/fio ]]
00:00:54.568 + export FIO_BIN=/usr/src/fio-static/fio
00:00:54.568 + FIO_BIN=/usr/src/fio-static/fio
00:00:54.568 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:54.568 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:54.568 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:54.568 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:54.568 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:54.568 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:54.568 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:54.568 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:54.568 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:00:54.568 Test configuration:
00:00:54.568 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:54.568 SPDK_TEST_NVMF=1
00:00:54.568 SPDK_TEST_NVME_CLI=1
00:00:54.568 SPDK_TEST_NVMF_NICS=mlx5
00:00:54.568 SPDK_RUN_UBSAN=1
00:00:54.568 NET_TYPE=phy
00:00:54.568 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:00:54.568 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
00:00:54.568 RUN_NIGHTLY=1
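autorun-spdk.conf is plain shell: autorun.sh sources it (visible in the ++ trace above) and branches on the flags. A minimal consumer might look like this sketch (illustrative, not SPDK's actual autorun.sh):

  source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
  if [[ $SPDK_TEST_NVMF -eq 1 && $NET_TYPE == phy ]]; then
      echo "running NVMe-oF tests against physical $SPDK_TEST_NVMF_NICS NICs"
  fi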
13:59:21 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:00:54.568 13:59:21 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:54.568 13:59:21 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:54.568 13:59:21 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:54.568 13:59:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:54.568 13:59:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:54.568 13:59:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:54.568 13:59:21 -- paths/export.sh@5 -- $ export PATH
00:00:54.568 13:59:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:54.568 13:59:21 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
00:00:54.568 13:59:21 -- common/autobuild_common.sh@440 -- $ date +%s
00:00:54.568 13:59:21 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1721822361.XXXXXX
00:00:54.568 13:59:21 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1721822361.HYkmFm
00:00:54.568 13:59:21 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:00:54.568 13:59:21 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']'
00:00:54.568 13:59:21 -- common/autobuild_common.sh@447 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
00:00:54.568 13:59:21 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk'
00:00:54.568 13:59:21 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:54.568 13:59:21 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:54.568 13:59:21 -- common/autobuild_common.sh@456 -- $ get_config_params
00:00:54.568 13:59:21 -- common/autotest_common.sh@395 -- $ xtrace_disable
00:00:54.568 13:59:21 -- common/autotest_common.sh@10 -- $ set +x
00:00:54.568 13:59:21 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build'
00:00:54.568 13:59:21 -- common/autobuild_common.sh@458 -- $ start_monitor_resources
00:00:54.568 13:59:21 -- pm/common@17 -- $ local monitor
00:00:54.568 13:59:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:54.568 13:59:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:54.568 13:59:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:54.568 13:59:21 -- pm/common@21 -- $ date +%s
00:00:54.568 13:59:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:54.568 13:59:21 -- pm/common@21 -- $ date +%s
00:00:54.568 13:59:21 -- pm/common@25 -- $ sleep 1
00:00:54.568 13:59:21 -- pm/common@21 -- $ date +%s
00:00:54.568 13:59:21 -- pm/common@21 -- $ date +%s
00:00:54.568 13:59:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721822361
00:00:54.569 13:59:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721822361
00:00:54.569 13:59:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721822361
00:00:54.569 13:59:21 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721822361
00:00:54.569 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721822361_collect-vmstat.pm.log
00:00:54.569 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721822361_collect-cpu-load.pm.log
00:00:54.569 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721822361_collect-cpu-temp.pm.log
00:00:54.569 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721822361_collect-bmc-pm.bmc.pm.log
00:00:55.944 13:59:22 -- common/autobuild_common.sh@459 -- $ trap stop_monitor_resources EXIT
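start_monitor_resources fans out one background collector per resource; each announces the log it redirects into under output/power. Schematically (a sketch only; flag meanings are inferred from the output above, not taken from pm/common's code):

  out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power
  for m in collect-cpu-load collect-vmstat collect-cpu-temp; do
      scripts/perf/pm/$m -d "$out" -l -p monitor.autobuild.sh.1721822361 &   # -l/-p appear to select log-to-file and the log-name prefix
  done
  sudo -E scripts/perf/pm/collect-bmc-pm -d "$out" -l -p monitor.autobuild.sh.1721822361 &
  trap stop_monitor_resources EXIT   # collectors are reaped when autobuild exits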
00:00:55.944 13:59:22 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:55.944 13:59:22 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:55.944 13:59:22 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:55.944 13:59:22 -- spdk/autobuild.sh@16 -- $ date -u
00:00:55.944 Wed Jul 24 11:59:22 AM UTC 2024
00:00:55.944 13:59:22 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:55.944 v24.05-15-g241d0f3c9
00:00:55.944 13:59:22 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:55.944 13:59:22 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:55.944 13:59:22 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:55.944 13:59:22 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']'
00:00:55.944 13:59:22 -- common/autotest_common.sh@1103 -- $ xtrace_disable
00:00:55.944 13:59:22 -- common/autotest_common.sh@10 -- $ set +x
00:00:55.944 ************************************
00:00:55.944 START TEST ubsan
00:00:55.944 ************************************
00:00:55.944 13:59:22 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan'
00:00:55.944 using ubsan
00:00:55.944
00:00:55.944 real 0m0.000s
00:00:55.944 user 0m0.000s
00:00:55.944 sys 0m0.000s
00:00:55.944 13:59:22 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable
00:00:55.944 13:59:22 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:55.944 ************************************
00:00:55.944 END TEST ubsan
00:00:55.944 ************************************
00:00:55.944 13:59:22 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']'
00:00:55.944 13:59:22 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:00:55.944 13:59:22 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk
00:00:55.944 13:59:22 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']'
00:00:55.944 13:59:22 -- common/autotest_common.sh@1103 -- $ xtrace_disable
00:00:55.944 13:59:22 -- common/autotest_common.sh@10 -- $ set +x
00:00:55.944 ************************************
00:00:55.944 START TEST build_native_dpdk
00:00:55.944 ************************************
00:00:55.944 13:59:22 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk
00:00:55.944 13:59:22 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:00:55.944 13:59:22 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:00:55.944 13:59:22 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:00:55.944 13:59:22 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:00:55.944 13:59:22 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:00:55.944 13:59:22 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:00:55.944 13:59:22 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:00:55.944 13:59:22 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:00:55.944 13:59:22 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:00:55.944 13:59:22 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:00:55.944 13:59:22 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:00:55.944 13:59:22 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:00:55.944 13:59:22 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:00:55.944 13:59:22 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:00:55.944 13:59:22 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
00:00:55.944 13:59:22 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
00:00:55.944 13:59:22 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-phy-autotest/dpdk
00:00:55.944 13:59:22 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/dpdk ]]
00:00:55.944 13:59:22 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:55.944 13:59:22 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk log --oneline -n 5
00:00:55.944 caf0f5d395 version: 22.11.4
00:00:55.944 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:00:55.944 dc9c799c7d vhost: fix missing spinlock unlock
00:00:55.944 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:00:55.944 6ef77f2a5e net/gve: fix RX buffer size alignment
00:00:55.944 13:59:22 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:00:55.944 13:59:22 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:00:55.944 13:59:22 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4
00:00:55.944 13:59:22 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:00:55.944 13:59:22 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:00:55.944 13:59:22 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:00:55.944 13:59:22 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:00:55.944 13:59:22 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:00:55.944 13:59:22 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:00:55.944 13:59:22 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
00:00:55.944 13:59:22 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
00:00:55.944 13:59:22 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:00:55.944 13:59:22 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:00:55.944 13:59:22 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
00:00:55.944 13:59:22 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/dpdk
00:00:55.944 13:59:22 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s
00:00:55.944 13:59:22 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
00:00:55.944 13:59:22 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0
00:00:55.944 13:59:22 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0
00:00:55.944 13:59:22 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l
00:00:55.944 13:59:22 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l
00:00:55.944 13:59:22 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-:
00:00:55.944 13:59:22 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1
00:00:55.944 13:59:22 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-:
00:00:55.944 13:59:22 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2
00:00:55.944 13:59:22 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<'
00:00:55.944 13:59:22 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3
00:00:55.944 13:59:22 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3
00:00:55.944 13:59:22 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v
00:00:55.944 13:59:22 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in
00:00:55.944 13:59:22 build_native_dpdk -- scripts/common.sh@342 -- $ : 1
00:00:55.944 13:59:22 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 ))
00:00:55.944 13:59:22 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:00:55.944 13:59:22 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22
00:00:55.944 13:59:22 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22
00:00:55.944 13:59:22 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:00:55.944 13:59:22 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22
00:00:55.944 13:59:22 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22
00:00:55.944 13:59:22 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21
00:00:55.944 13:59:22 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21
00:00:55.944 13:59:22 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:00:55.944 13:59:22 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21
00:00:55.945 13:59:22 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21
00:00:55.945 13:59:22 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] ))
00:00:55.945 13:59:22 build_native_dpdk -- scripts/common.sh@364 -- $ return 1
00:00:55.945 13:59:22 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1
00:00:55.945 patching file config/rte_config.h
00:00:55.945 Hunk #1 succeeded at 60 (offset 1 line).
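The lt/cmp_versions trace above is a component-wise version comparison: split both versions on `.`, `-` or `:`, then compare numerically field by field. Here `22 > 21` in the first field settles 22.11.4 < 21.11.0 as false (return 1), so the rte_config.h patch for newer DPDK is applied. A condensed sketch of the same idea (not SPDK's verbatim scripts/common.sh):

  lt() {   # lt A B -> exit 0 iff version A < version B
      local IFS=.-: i
      local -a v1 v2
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$2"
      for ((i = 0; i < 3; i++)); do
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
      done
      return 1   # equal is not less-than
  }
  lt 22.11.4 24.07.0 && echo "apply the pcapng compatibility patch"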
00:00:55.945 13:59:23 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0
00:00:55.945 13:59:23 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 24.07.0
00:00:55.945 13:59:23 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l
00:00:55.945 13:59:23 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l
00:00:55.945 13:59:23 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-:
00:00:55.945 13:59:23 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1
00:00:55.945 13:59:23 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-:
00:00:55.945 13:59:23 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2
00:00:55.945 13:59:23 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<'
00:00:55.945 13:59:23 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3
00:00:55.945 13:59:23 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3
00:00:55.945 13:59:23 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v
00:00:55.945 13:59:23 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in
00:00:55.945 13:59:23 build_native_dpdk -- scripts/common.sh@342 -- $ : 1
00:00:55.945 13:59:23 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 ))
00:00:55.945 13:59:23 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:00:55.945 13:59:23 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22
00:00:55.945 13:59:23 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22
00:00:55.945 13:59:23 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:00:55.945 13:59:23 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22
00:00:55.945 13:59:23 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22
00:00:55.945 13:59:23 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24
00:00:55.945 13:59:23 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24
00:00:55.945 13:59:23 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:00:55.945 13:59:23 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24
00:00:55.945 13:59:23 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24
00:00:55.945 13:59:23 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] ))
00:00:55.945 13:59:23 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] ))
00:00:55.945 13:59:23 build_native_dpdk -- scripts/common.sh@365 -- $ return 0
00:00:55.945 13:59:23 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1
00:00:55.945 patching file lib/pcapng/rte_pcapng.c
00:00:55.945 Hunk #1 succeeded at 110 (offset -18 lines).
00:00:55.945 13:59:23 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false
00:00:55.945 13:59:23 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s
00:00:55.945 13:59:23 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']'
00:00:55.945 13:59:23 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
00:00:55.945 13:59:23 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:01:00.141 The Meson build system
00:01:00.141 Version: 1.3.1
00:01:00.141 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/dpdk
00:01:00.141 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp
00:01:00.141 Build type: native build
00:01:00.141 Program cat found: YES (/usr/bin/cat)
00:01:00.141 Project name: DPDK
00:01:00.141 Project version: 22.11.4
00:01:00.141 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:00.141 C linker for the host machine: gcc ld.bfd 2.39-16
00:01:00.141 Host machine cpu family: x86_64
00:01:00.141 Host machine cpu: x86_64
00:01:00.141 Message: ## Building in Developer Mode ##
00:01:00.141 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:00.141 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/check-symbols.sh)
00:01:00.141 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh)
00:01:00.141 Program objdump found: YES (/usr/bin/objdump)
00:01:00.141 Program python3 found: YES (/usr/bin/python3)
00:01:00.141 Program cat found: YES (/usr/bin/cat)
00:01:00.141 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:01:00.141 Checking for size of "void *" : 8
00:01:00.141 Checking for size of "void *" : 8 (cached)
00:01:00.141 Library m found: YES
00:01:00.141 Library numa found: YES
00:01:00.141 Has header "numaif.h" : YES
00:01:00.141 Library fdt found: NO
00:01:00.141 Library execinfo found: NO
00:01:00.141 Has header "execinfo.h" : YES
00:01:00.141 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:00.141 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:00.141 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:00.141 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:00.141 Run-time dependency openssl found: YES 3.0.9
00:01:00.141 Run-time dependency libpcap found: YES 1.10.4
00:01:00.141 Has header "pcap.h" with dependency libpcap: YES
00:01:00.141 Compiler for C supports arguments -Wcast-qual: YES
00:01:00.141 Compiler for C supports arguments -Wdeprecated: YES
00:01:00.141 Compiler for C supports arguments -Wformat: YES
00:01:00.141 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:00.141 Compiler for C supports arguments -Wformat-security: NO
00:01:00.141 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:00.141 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:00.141 Compiler for C supports arguments -Wnested-externs: YES
00:01:00.141 Compiler for C supports arguments -Wold-style-definition: YES
00:01:00.141 Compiler for C supports arguments -Wpointer-arith: YES
00:01:00.141 Compiler for C supports arguments -Wsign-compare: YES
00:01:00.141 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:00.141 Compiler for C supports arguments -Wundef: YES
00:01:00.141 Compiler for C supports arguments -Wwrite-strings: YES
00:01:00.141 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:00.141 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:00.141 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:00.141 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:00.141 Compiler for C supports arguments -mavx512f: YES
00:01:00.141 Checking if "AVX512 checking" compiles: YES
00:01:00.141 Fetching value of define "__SSE4_2__" : 1
00:01:00.141 Fetching value of define "__AES__" : 1
00:01:00.141 Fetching value of define "__AVX__" : 1
00:01:00.141 Fetching value of define "__AVX2__" : (undefined)
00:01:00.141 Fetching value of define "__AVX512BW__" : (undefined)
00:01:00.141 Fetching value of define "__AVX512CD__" : (undefined)
00:01:00.141 Fetching value of define "__AVX512DQ__" : (undefined)
00:01:00.141 Fetching value of define "__AVX512F__" : (undefined)
00:01:00.141 Fetching value of define "__AVX512VL__" : (undefined)
00:01:00.141 Fetching value of define "__PCLMUL__" : 1
00:01:00.141 Fetching value of define "__RDRND__" : 1
00:01:00.141 Fetching value of define "__RDSEED__" : (undefined)
00:01:00.141 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:00.141 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:00.141 Message: lib/kvargs: Defining dependency "kvargs"
00:01:00.141 Message: lib/telemetry: Defining dependency "telemetry"
00:01:00.141 Checking for function "getentropy" : YES
00:01:00.141 Message: lib/eal: Defining dependency "eal"
00:01:00.141 Message: lib/ring: Defining dependency "ring"
00:01:00.141 Message: lib/rcu: Defining dependency "rcu"
00:01:00.141 Message: lib/mempool: Defining dependency "mempool"
00:01:00.141 Message: lib/mbuf: Defining dependency "mbuf"
Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:00.141 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:00.141 Compiler for C supports arguments -mpclmul: YES 00:01:00.141 Compiler for C supports arguments -maes: YES 00:01:00.141 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:00.141 Compiler for C supports arguments -mavx512bw: YES 00:01:00.141 Compiler for C supports arguments -mavx512dq: YES 00:01:00.141 Compiler for C supports arguments -mavx512vl: YES 00:01:00.141 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:00.141 Compiler for C supports arguments -mavx2: YES 00:01:00.141 Compiler for C supports arguments -mavx: YES 00:01:00.141 Message: lib/net: Defining dependency "net" 00:01:00.141 Message: lib/meter: Defining dependency "meter" 00:01:00.141 Message: lib/ethdev: Defining dependency "ethdev" 00:01:00.141 Message: lib/pci: Defining dependency "pci" 00:01:00.141 Message: lib/cmdline: Defining dependency "cmdline" 00:01:00.141 Message: lib/metrics: Defining dependency "metrics" 00:01:00.141 Message: lib/hash: Defining dependency "hash" 00:01:00.141 Message: lib/timer: Defining dependency "timer" 00:01:00.141 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:00.141 Compiler for C supports arguments -mavx2: YES (cached) 00:01:00.141 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:00.141 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:00.141 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:00.141 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:00.141 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:00.141 Message: lib/acl: Defining dependency "acl" 00:01:00.141 Message: lib/bbdev: Defining dependency "bbdev" 00:01:00.141 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:00.141 Run-time dependency libelf found: YES 0.190 00:01:00.141 Message: lib/bpf: Defining dependency "bpf" 00:01:00.141 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:00.141 Message: lib/compressdev: Defining dependency "compressdev" 00:01:00.141 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:00.141 Message: lib/distributor: Defining dependency "distributor" 00:01:00.141 Message: lib/efd: Defining dependency "efd" 00:01:00.141 Message: lib/eventdev: Defining dependency "eventdev" 00:01:00.141 Message: lib/gpudev: Defining dependency "gpudev" 00:01:00.141 Message: lib/gro: Defining dependency "gro" 00:01:00.141 Message: lib/gso: Defining dependency "gso" 00:01:00.141 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:00.141 Message: lib/jobstats: Defining dependency "jobstats" 00:01:00.141 Message: lib/latencystats: Defining dependency "latencystats" 00:01:00.141 Message: lib/lpm: Defining dependency "lpm" 00:01:00.141 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:00.141 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:00.141 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:00.141 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:00.141 Message: lib/member: Defining dependency "member" 00:01:00.141 Message: lib/pcapng: Defining dependency "pcapng" 00:01:00.141 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:00.141 Message: lib/power: Defining dependency "power" 00:01:00.141 Message: lib/rawdev: Defining dependency "rawdev" 00:01:00.141 Message: lib/regexdev: Defining dependency "regexdev" 
00:01:00.141 Message: lib/dmadev: Defining dependency "dmadev"
00:01:00.141 Message: lib/rib: Defining dependency "rib"
00:01:00.141 Message: lib/reorder: Defining dependency "reorder"
00:01:00.141 Message: lib/sched: Defining dependency "sched"
00:01:00.141 Message: lib/security: Defining dependency "security"
00:01:00.141 Message: lib/stack: Defining dependency "stack"
00:01:00.141 Has header "linux/userfaultfd.h" : YES
00:01:00.141 Message: lib/vhost: Defining dependency "vhost"
00:01:00.141 Message: lib/ipsec: Defining dependency "ipsec"
00:01:00.141 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:00.141 Fetching value of define "__AVX512DQ__" : (undefined) (cached)
00:01:00.141 Compiler for C supports arguments -mavx512f -mavx512dq: YES
00:01:00.141 Compiler for C supports arguments -mavx512bw: YES (cached)
00:01:00.141 Message: lib/fib: Defining dependency "fib"
00:01:00.141 Message: lib/port: Defining dependency "port"
00:01:00.141 Message: lib/pdump: Defining dependency "pdump"
00:01:00.141 Message: lib/table: Defining dependency "table"
00:01:00.141 Message: lib/pipeline: Defining dependency "pipeline"
00:01:00.141 Message: lib/graph: Defining dependency "graph"
00:01:00.141 Message: lib/node: Defining dependency "node"
00:01:00.141 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:00.141 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:00.141 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:00.141 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:00.141 Compiler for C supports arguments -Wno-sign-compare: YES
00:01:00.141 Compiler for C supports arguments -Wno-unused-value: YES
00:01:00.141 Compiler for C supports arguments -Wno-format: YES
00:01:01.089 Compiler for C supports arguments -Wno-format-security: YES
00:01:01.089 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:01:01.089 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:01:01.089 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:01:01.089 Compiler for C supports arguments -Wno-unused-parameter: YES
00:01:01.089 Fetching value of define "__AVX2__" : (undefined) (cached)
00:01:01.089 Compiler for C supports arguments -mavx2: YES (cached)
00:01:01.089 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:01.089 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:01.089 Compiler for C supports arguments -mavx512bw: YES (cached)
00:01:01.089 Compiler for C supports arguments -march=skylake-avx512: YES
00:01:01.089 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:01:01.089 Program doxygen found: YES (/usr/bin/doxygen)
00:01:01.089 Configuring doxy-api.conf using configuration
00:01:01.089 Program sphinx-build found: NO
00:01:01.089 Configuring rte_build_config.h using configuration
00:01:01.089 Message:
00:01:01.089 =================
00:01:01.089 Applications Enabled
00:01:01.089 =================
00:01:01.089
00:01:01.089 apps:
00:01:01.089 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf,
00:01:01.089 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad,
00:01:01.089 test-security-perf,
00:01:01.089
00:01:01.089 Message:
00:01:01.089 =================
00:01:01.089 Libraries Enabled
00:01:01.089 =================
00:01:01.089
00:01:01.089 libs:
00:01:01.089 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net,
00:01:01.089 meter, ethdev, pci, cmdline, metrics, hash, timer, acl,
00:01:01.089 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd,
00:01:01.089 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm,
00:01:01.089 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder,
00:01:01.089 sched, security, stack, vhost, ipsec, fib, port, pdump,
00:01:01.089 table, pipeline, graph, node,
00:01:01.089
00:01:01.089 Message:
00:01:01.089 ===============
00:01:01.089 Drivers Enabled
00:01:01.089 ===============
00:01:01.089
00:01:01.089 common:
00:01:01.089
00:01:01.089 bus:
00:01:01.089 pci, vdev,
00:01:01.089 mempool:
00:01:01.089 ring,
00:01:01.089 dma:
00:01:01.089
00:01:01.089 net:
00:01:01.089 i40e,
00:01:01.089 raw:
00:01:01.089
00:01:01.089 crypto:
00:01:01.089
00:01:01.089 compress:
00:01:01.089
00:01:01.089 regex:
00:01:01.089
00:01:01.089 vdpa:
00:01:01.089
00:01:01.089 event:
00:01:01.089
00:01:01.089 baseband:
00:01:01.089
00:01:01.089 gpu:
00:01:01.089
00:01:01.089
00:01:01.089 Message:
00:01:01.089 =================
00:01:01.089 Content Skipped
00:01:01.089 =================
00:01:01.089
00:01:01.089 apps:
00:01:01.089
00:01:01.089 libs:
00:01:01.089 kni: explicitly disabled via build config (deprecated lib)
00:01:01.089 flow_classify: explicitly disabled via build config (deprecated lib)
00:01:01.089
00:01:01.089 drivers:
00:01:01.089 common/cpt: not in enabled drivers build config
00:01:01.089 common/dpaax: not in enabled drivers build config
00:01:01.090 common/iavf: not in enabled drivers build config
00:01:01.090 common/idpf: not in enabled drivers build config
00:01:01.090 common/mvep: not in enabled drivers build config
00:01:01.090 common/octeontx: not in enabled drivers build config
00:01:01.090 bus/auxiliary: not in enabled drivers build config
00:01:01.090 bus/dpaa: not in enabled drivers build config
00:01:01.090 bus/fslmc: not in enabled drivers build config
00:01:01.090 bus/ifpga: not in enabled drivers build config
00:01:01.090 bus/vmbus: not in enabled drivers build config
00:01:01.090 common/cnxk: not in enabled drivers build config
00:01:01.090 common/mlx5: not in enabled drivers build config
00:01:01.090 common/qat: not in enabled drivers build config
00:01:01.090 common/sfc_efx: not in enabled drivers build config
00:01:01.090 mempool/bucket: not in enabled drivers build config
00:01:01.090 mempool/cnxk: not in enabled drivers build config
00:01:01.090 mempool/dpaa: not in enabled drivers build config
00:01:01.090 mempool/dpaa2: not in enabled drivers build config
00:01:01.090 mempool/octeontx: not in enabled drivers build config
00:01:01.090 mempool/stack: not in enabled drivers build config
00:01:01.090 dma/cnxk: not in enabled drivers build config
00:01:01.090 dma/dpaa: not in enabled drivers build config
00:01:01.090 dma/dpaa2: not in enabled drivers build config
00:01:01.090 dma/hisilicon: not in enabled drivers build config
00:01:01.090 dma/idxd: not in enabled drivers build config
00:01:01.090 dma/ioat: not in enabled drivers build config
00:01:01.090 dma/skeleton: not in enabled drivers build config
00:01:01.090 net/af_packet: not in enabled drivers build config
00:01:01.090 net/af_xdp: not in enabled drivers build config
00:01:01.090 net/ark: not in enabled drivers build config
00:01:01.090 net/atlantic: not in enabled drivers build config
00:01:01.090 net/avp: not in enabled drivers build config
00:01:01.090 net/axgbe: not in enabled drivers build config
00:01:01.090 net/bnx2x: not in enabled drivers build config
00:01:01.090 net/bnxt: not in enabled drivers build config
00:01:01.090 net/bonding: not in enabled drivers build config
00:01:01.090 net/cnxk: not in enabled drivers build config
00:01:01.090 net/cxgbe: not in enabled drivers build config
00:01:01.090 net/dpaa: not in enabled drivers build config
00:01:01.090 net/dpaa2: not in enabled drivers build config
00:01:01.090 net/e1000: not in enabled drivers build config
00:01:01.090 net/ena: not in enabled drivers build config
00:01:01.090 net/enetc: not in enabled drivers build config
00:01:01.090 net/enetfec: not in enabled drivers build config
00:01:01.090 net/enic: not in enabled drivers build config
00:01:01.090 net/failsafe: not in enabled drivers build config
00:01:01.090 net/fm10k: not in enabled drivers build config
00:01:01.090 net/gve: not in enabled drivers build config
00:01:01.090 net/hinic: not in enabled drivers build config
00:01:01.090 net/hns3: not in enabled drivers build config
00:01:01.090 net/iavf: not in enabled drivers build config
00:01:01.090 net/ice: not in enabled drivers build config
00:01:01.090 net/idpf: not in enabled drivers build config
00:01:01.090 net/igc: not in enabled drivers build config
00:01:01.090 net/ionic: not in enabled drivers build config
00:01:01.090 net/ipn3ke: not in enabled drivers build config
00:01:01.090 net/ixgbe: not in enabled drivers build config
00:01:01.090 net/kni: not in enabled drivers build config
00:01:01.090 net/liquidio: not in enabled drivers build config
00:01:01.090 net/mana: not in enabled drivers build config
00:01:01.090 net/memif: not in enabled drivers build config
00:01:01.090 net/mlx4: not in enabled drivers build config
00:01:01.090 net/mlx5: not in enabled drivers build config
00:01:01.090 net/mvneta: not in enabled drivers build config
00:01:01.090 net/mvpp2: not in enabled drivers build config
00:01:01.090 net/netvsc: not in enabled drivers build config
00:01:01.090 net/nfb: not in enabled drivers build config
00:01:01.090 net/nfp: not in enabled drivers build config
00:01:01.090 net/ngbe: not in enabled drivers build config
00:01:01.090 net/null: not in enabled drivers build config
00:01:01.090 net/octeontx: not in enabled drivers build config
00:01:01.090 net/octeon_ep: not in enabled drivers build config
00:01:01.090 net/pcap: not in enabled drivers build config
00:01:01.090 net/pfe: not in enabled drivers build config
00:01:01.090 net/qede: not in enabled drivers build config
00:01:01.090 net/ring: not in enabled drivers build config
00:01:01.090 net/sfc: not in enabled drivers build config
00:01:01.090 net/softnic: not in enabled drivers build config
00:01:01.090 net/tap: not in enabled drivers build config
00:01:01.090 net/thunderx: not in enabled drivers build config
00:01:01.090 net/txgbe: not in enabled drivers build config
00:01:01.090 net/vdev_netvsc: not in enabled drivers build config
00:01:01.090 net/vhost: not in enabled drivers build config
00:01:01.090 net/virtio: not in enabled drivers build config
00:01:01.090 net/vmxnet3: not in enabled drivers build config
00:01:01.090 raw/cnxk_bphy: not in enabled drivers build config
00:01:01.090 raw/cnxk_gpio: not in enabled drivers build config
00:01:01.090 raw/dpaa2_cmdif: not in enabled drivers build config
00:01:01.090 raw/ifpga: not in enabled drivers build config
00:01:01.090 raw/ntb: not in enabled drivers build config
00:01:01.090 raw/skeleton: not in enabled drivers build config
00:01:01.090 crypto/armv8: not in enabled drivers build config
00:01:01.090 crypto/bcmfs: not in enabled drivers build config
00:01:01.090 crypto/caam_jr: not in enabled drivers build config
00:01:01.090 crypto/ccp: not in enabled drivers build config
00:01:01.090 crypto/cnxk: not in enabled drivers build config
00:01:01.090 crypto/dpaa_sec: not in enabled drivers build config
00:01:01.090 crypto/dpaa2_sec: not in enabled drivers build config
00:01:01.090 crypto/ipsec_mb: not in enabled drivers build config
00:01:01.090 crypto/mlx5: not in enabled drivers build config
00:01:01.090 crypto/mvsam: not in enabled drivers build config
00:01:01.090 crypto/nitrox: not in enabled drivers build config
00:01:01.090 crypto/null: not in enabled drivers build config
00:01:01.090 crypto/octeontx: not in enabled drivers build config
00:01:01.090 crypto/openssl: not in enabled drivers build config
00:01:01.090 crypto/scheduler: not in enabled drivers build config
00:01:01.090 crypto/uadk: not in enabled drivers build config
00:01:01.090 crypto/virtio: not in enabled drivers build config
00:01:01.090 compress/isal: not in enabled drivers build config
00:01:01.090 compress/mlx5: not in enabled drivers build config
00:01:01.090 compress/octeontx: not in enabled drivers build config
00:01:01.090 compress/zlib: not in enabled drivers build config
00:01:01.090 regex/mlx5: not in enabled drivers build config
00:01:01.090 regex/cn9k: not in enabled drivers build config
00:01:01.090 vdpa/ifc: not in enabled drivers build config
00:01:01.090 vdpa/mlx5: not in enabled drivers build config
00:01:01.090 vdpa/sfc: not in enabled drivers build config
00:01:01.090 event/cnxk: not in enabled drivers build config
00:01:01.090 event/dlb2: not in enabled drivers build config
00:01:01.090 event/dpaa: not in enabled drivers build config
00:01:01.090 event/dpaa2: not in enabled drivers build config
00:01:01.090 event/dsw: not in enabled drivers build config
00:01:01.090 event/opdl: not in enabled drivers build config
00:01:01.090 event/skeleton: not in enabled drivers build config
00:01:01.090 event/sw: not in enabled drivers build config
00:01:01.090 event/octeontx: not in enabled drivers build config
00:01:01.090 baseband/acc: not in enabled drivers build config
00:01:01.090 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:01:01.090 baseband/fpga_lte_fec: not in enabled drivers build config
00:01:01.090 baseband/la12xx: not in enabled drivers build config
00:01:01.090 baseband/null: not in enabled drivers build config
00:01:01.090 baseband/turbo_sw: not in enabled drivers build config
00:01:01.090 gpu/cuda: not in enabled drivers build config
00:01:01.090
00:01:01.090
00:01:01.090 Build targets in project: 316
00:01:01.090
00:01:01.090 DPDK 22.11.4
00:01:01.090
00:01:01.090 User defined options
00:01:01.090 libdir : lib
00:01:01.090 prefix : /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
00:01:01.090 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:01:01.090 c_link_args :
00:01:01.090 enable_docs : false
00:01:01.090 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:01:01.090 enable_kmods : false
00:01:01.090 machine : native
00:01:01.090 tests : false
00:01:01.090
00:01:01.090 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:01.090 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
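Condensed, the external DPDK build that autobuild drives here is one meson configure plus one parallel ninja invocation (a sketch; the full flag set is in the meson command above, and `meson setup` is the non-deprecated spelling the warning asks for):

  cd /var/jenkins/workspace/nvmf-phy-autotest/dpdk
  meson setup build-tmp --prefix=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --libdir lib \
      -Denable_docs=false -Denable_kmods=false -Dtests=false \
      '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
  ninja -C build-tmp -j48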
00:01:01.090 13:59:28 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -j48 00:01:01.090 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp' 00:01:01.090 [1/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:01.090 [2/745] Generating lib/rte_telemetry_def with a custom command 00:01:01.090 [3/745] Generating lib/rte_telemetry_mingw with a custom command 00:01:01.091 [4/745] Generating lib/rte_kvargs_mingw with a custom command 00:01:01.091 [5/745] Generating lib/rte_kvargs_def with a custom command 00:01:01.091 [6/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:01.091 [7/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:01.091 [8/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:01.091 [9/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:01.091 [10/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:01.091 [11/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:01.091 [12/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:01.091 [13/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:01.091 [14/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:01.091 [15/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:01.091 [16/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:01.091 [17/745] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:01.355 [18/745] Linking static target lib/librte_kvargs.a 00:01:01.355 [19/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:01.355 [20/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:01.355 [21/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:01.355 [22/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:01.355 [23/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:01.355 [24/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:01.355 [25/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:01.355 [26/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:01.355 [27/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:01.355 [28/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:01.355 [29/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:01.355 [30/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:01.355 [31/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:01:01.355 [32/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:01.355 [33/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:01.355 [34/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:01.355 [35/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:01.355 [36/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:01.355 [37/745] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:01.355 [38/745] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:01.355 [39/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:01.355 [40/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:01.355 [41/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:01.355 [42/745] Generating lib/rte_eal_def with a custom command 00:01:01.355 [43/745] Generating lib/rte_eal_mingw with a custom command 00:01:01.355 [44/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:01.355 [45/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:01.355 [46/745] Generating lib/rte_ring_def with a custom command 00:01:01.355 [47/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:01.355 [48/745] Generating lib/rte_ring_mingw with a custom command 00:01:01.355 [49/745] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:01.355 [50/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:01.355 [51/745] Generating lib/rte_rcu_def with a custom command 00:01:01.355 [52/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:01.355 [53/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:01.355 [54/745] Generating lib/rte_rcu_mingw with a custom command 00:01:01.355 [55/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:01.355 [56/745] Generating lib/rte_mempool_def with a custom command 00:01:01.355 [57/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:01.355 [58/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:01.355 [59/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:01.355 [60/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:01.355 [61/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:01.355 [62/745] Generating lib/rte_mbuf_def with a custom command 00:01:01.355 [63/745] Generating lib/rte_mbuf_mingw with a custom command 00:01:01.355 [64/745] Generating lib/rte_mempool_mingw with a custom command 00:01:01.355 [65/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:01.618 [66/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:01.618 [67/745] Generating lib/rte_net_def with a custom command 00:01:01.618 [68/745] Generating lib/rte_net_mingw with a custom command 00:01:01.618 [69/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:01.618 [70/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:01.618 [71/745] Generating lib/rte_meter_mingw with a custom command 00:01:01.618 [72/745] Generating lib/rte_meter_def with a custom command 00:01:01.618 [73/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:01.618 [74/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:01.618 [75/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:01.618 [76/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:01.618 [77/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:01.618 [78/745] Generating lib/rte_ethdev_def with a custom command 00:01:01.618 [79/745] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.618 [80/745] Compiling C object 
lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:01.618 [81/745] Linking static target lib/librte_ring.a 00:01:01.618 [82/745] Generating lib/rte_ethdev_mingw with a custom command 00:01:01.618 [83/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:01.618 [84/745] Linking target lib/librte_kvargs.so.23.0 00:01:01.882 [85/745] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:01.882 [86/745] Generating lib/rte_pci_def with a custom command 00:01:01.882 [87/745] Linking static target lib/librte_meter.a 00:01:01.882 [88/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:01.882 [89/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:01.882 [90/745] Generating lib/rte_pci_mingw with a custom command 00:01:01.882 [91/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:01.882 [92/745] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:01.882 [93/745] Linking static target lib/librte_pci.a 00:01:01.882 [94/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:01.882 [95/745] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:01.882 [96/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:01.882 [97/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:02.143 [98/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:02.143 [99/745] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.143 [100/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:02.143 [101/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:02.143 [102/745] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.143 [103/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:02.143 [104/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:02.143 [105/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:02.143 [106/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:02.143 [107/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:02.143 [108/745] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.143 [109/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:02.143 [110/745] Generating lib/rte_cmdline_def with a custom command 00:01:02.143 [111/745] Generating lib/rte_cmdline_mingw with a custom command 00:01:02.143 [112/745] Linking static target lib/librte_telemetry.a 00:01:02.143 [113/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:02.143 [114/745] Generating lib/rte_metrics_def with a custom command 00:01:02.143 [115/745] Generating lib/rte_metrics_mingw with a custom command 00:01:02.143 [116/745] Generating lib/rte_hash_def with a custom command 00:01:02.404 [117/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:02.404 [118/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:02.404 [119/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:02.404 [120/745] Generating lib/rte_hash_mingw with a custom command 00:01:02.404 [121/745] Generating lib/rte_timer_def with a custom command 00:01:02.404 [122/745] Generating 
lib/rte_timer_mingw with a custom command 00:01:02.404 [123/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:02.404 [124/745] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:02.663 [125/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:02.663 [126/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:02.663 [127/745] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:02.663 [128/745] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:02.663 [129/745] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:02.663 [130/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:02.663 [131/745] Generating lib/rte_acl_def with a custom command 00:01:02.663 [132/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:02.663 [133/745] Generating lib/rte_acl_mingw with a custom command 00:01:02.663 [134/745] Generating lib/rte_bbdev_def with a custom command 00:01:02.663 [135/745] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:02.663 [136/745] Generating lib/rte_bbdev_mingw with a custom command 00:01:02.663 [137/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:02.663 [138/745] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:02.663 [139/745] Generating lib/rte_bitratestats_def with a custom command 00:01:02.664 [140/745] Generating lib/rte_bitratestats_mingw with a custom command 00:01:02.664 [141/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:02.664 [142/745] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.664 [143/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:02.664 [144/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:02.923 [145/745] Linking target lib/librte_telemetry.so.23.0 00:01:02.923 [146/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:02.923 [147/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:02.923 [148/745] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:02.923 [149/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:02.923 [150/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:02.923 [151/745] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:02.923 [152/745] Generating lib/rte_bpf_def with a custom command 00:01:02.923 [153/745] Generating lib/rte_bpf_mingw with a custom command 00:01:02.923 [154/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:02.923 [155/745] Generating lib/rte_cfgfile_def with a custom command 00:01:02.923 [156/745] Generating lib/rte_cfgfile_mingw with a custom command 00:01:02.923 [157/745] Generating lib/rte_compressdev_def with a custom command 00:01:02.923 [158/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:02.923 [159/745] Generating lib/rte_compressdev_mingw with a custom command 00:01:02.923 [160/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:03.186 [161/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:03.186 [162/745] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:03.186 [163/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:03.186 
[164/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:03.186 [165/745] Generating lib/rte_cryptodev_def with a custom command 00:01:03.186 [166/745] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:03.186 [167/745] Linking static target lib/librte_rcu.a 00:01:03.186 [168/745] Generating lib/rte_cryptodev_mingw with a custom command 00:01:03.186 [169/745] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:03.186 [170/745] Generating lib/rte_distributor_def with a custom command 00:01:03.186 [171/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:03.186 [172/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:03.186 [173/745] Linking static target lib/librte_timer.a 00:01:03.186 [174/745] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:03.186 [175/745] Linking static target lib/librte_cmdline.a 00:01:03.186 [176/745] Linking static target lib/librte_net.a 00:01:03.186 [177/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:03.186 [178/745] Generating lib/rte_efd_def with a custom command 00:01:03.186 [179/745] Generating lib/rte_distributor_mingw with a custom command 00:01:03.186 [180/745] Generating lib/rte_efd_mingw with a custom command 00:01:03.448 [181/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:03.448 [182/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:03.448 [183/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:03.448 [184/745] Linking static target lib/librte_metrics.a 00:01:03.448 [185/745] Linking static target lib/librte_mempool.a 00:01:03.448 [186/745] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:03.448 [187/745] Linking static target lib/librte_cfgfile.a 00:01:03.711 [188/745] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.711 [189/745] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:03.711 [190/745] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.711 [191/745] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.711 [192/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:03.711 [193/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:03.711 [194/745] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:03.711 [195/745] Generating lib/rte_eventdev_def with a custom command 00:01:03.711 [196/745] Linking static target lib/librte_eal.a 00:01:03.976 [197/745] Generating lib/rte_eventdev_mingw with a custom command 00:01:03.976 [198/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:03.976 [199/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:03.976 [200/745] Generating lib/rte_gpudev_def with a custom command 00:01:03.976 [201/745] Generating lib/rte_gpudev_mingw with a custom command 00:01:03.976 [202/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:03.976 [203/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:03.976 [204/745] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.976 [205/745] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:03.977 [206/745] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:03.977 [207/745] Linking static target 
lib/librte_bitratestats.a 00:01:03.977 [208/745] Generating lib/rte_gro_def with a custom command 00:01:03.977 [209/745] Generating lib/rte_gro_mingw with a custom command 00:01:03.977 [210/745] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.240 [211/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:04.240 [212/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:04.240 [213/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:04.240 [214/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:04.240 [215/745] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.240 [216/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:04.505 [217/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:04.505 [218/745] Generating lib/rte_gso_def with a custom command 00:01:04.505 [219/745] Generating lib/rte_gso_mingw with a custom command 00:01:04.505 [220/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:04.505 [221/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:04.505 [222/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:04.505 [223/745] Generating lib/rte_ip_frag_def with a custom command 00:01:04.505 [224/745] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:04.505 [225/745] Linking static target lib/librte_bbdev.a 00:01:04.505 [226/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:04.505 [227/745] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.505 [228/745] Generating lib/rte_ip_frag_mingw with a custom command 00:01:04.778 [229/745] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.778 [230/745] Generating lib/rte_jobstats_def with a custom command 00:01:04.778 [231/745] Generating lib/rte_jobstats_mingw with a custom command 00:01:04.778 [232/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:04.778 [233/745] Generating lib/rte_latencystats_def with a custom command 00:01:04.778 [234/745] Generating lib/rte_latencystats_mingw with a custom command 00:01:04.778 [235/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:04.778 [236/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:04.778 [237/745] Linking static target lib/librte_compressdev.a 00:01:04.778 [238/745] Generating lib/rte_lpm_def with a custom command 00:01:04.778 [239/745] Generating lib/rte_lpm_mingw with a custom command 00:01:04.778 [240/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:04.778 [241/745] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:04.778 [242/745] Linking static target lib/librte_jobstats.a 00:01:05.079 [243/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:05.079 [244/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:05.079 [245/745] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:05.079 [246/745] Generating lib/rte_member_def with a custom command 00:01:05.079 [247/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:05.079 [248/745] Linking 
static target lib/librte_distributor.a 00:01:05.353 [249/745] Generating lib/rte_member_mingw with a custom command 00:01:05.353 [250/745] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:05.353 [251/745] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:05.353 [252/745] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.353 [253/745] Generating lib/rte_pcapng_mingw with a custom command 00:01:05.353 [254/745] Generating lib/rte_pcapng_def with a custom command 00:01:05.353 [255/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:05.353 [256/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:05.353 [257/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:05.353 [258/745] Linking static target lib/librte_bpf.a 00:01:05.616 [259/745] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.616 [260/745] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:05.616 [261/745] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:05.616 [262/745] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:05.616 [263/745] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:05.616 [264/745] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:05.616 [265/745] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:05.616 [266/745] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:05.616 [267/745] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:05.616 [268/745] Linking static target lib/librte_gpudev.a 00:01:05.616 [269/745] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.616 [270/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:05.616 [271/745] Generating lib/rte_power_def with a custom command 00:01:05.616 [272/745] Generating lib/rte_power_mingw with a custom command 00:01:05.616 [273/745] Generating lib/rte_rawdev_def with a custom command 00:01:05.616 [274/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:05.616 [275/745] Generating lib/rte_rawdev_mingw with a custom command 00:01:05.616 [276/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:05.616 [277/745] Generating lib/rte_regexdev_def with a custom command 00:01:05.616 [278/745] Linking static target lib/librte_gro.a 00:01:05.880 [279/745] Generating lib/rte_regexdev_mingw with a custom command 00:01:05.880 [280/745] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:05.880 [281/745] Generating lib/rte_dmadev_def with a custom command 00:01:05.880 [282/745] Generating lib/rte_dmadev_mingw with a custom command 00:01:05.880 [283/745] Generating lib/rte_rib_def with a custom command 00:01:05.880 [284/745] Generating lib/rte_rib_mingw with a custom command 00:01:05.880 [285/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:05.880 [286/745] Generating lib/rte_reorder_def with a custom command 00:01:05.880 [287/745] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.880 [288/745] Generating lib/rte_reorder_mingw with a custom command 00:01:05.880 [289/745] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:01:06.148 [290/745] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:06.148 [291/745] Generating lib/gro.sym_chk with a 
custom command (wrapped by meson to capture output) 00:01:06.148 [292/745] Generating lib/rte_sched_def with a custom command 00:01:06.148 [293/745] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:06.148 [294/745] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:06.148 [295/745] Generating lib/rte_sched_mingw with a custom command 00:01:06.148 [296/745] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.148 [297/745] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:06.148 [298/745] Generating lib/rte_security_mingw with a custom command 00:01:06.148 [299/745] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:06.148 [300/745] Generating lib/rte_security_def with a custom command 00:01:06.148 [301/745] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:06.148 [302/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:06.148 [303/745] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:06.148 [304/745] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:06.148 [305/745] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:06.148 [306/745] Linking static target lib/librte_latencystats.a 00:01:06.148 [307/745] Generating lib/rte_stack_def with a custom command 00:01:06.148 [308/745] Generating lib/rte_stack_mingw with a custom command 00:01:06.429 [309/745] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:06.429 [310/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:06.429 [311/745] Linking static target lib/librte_rawdev.a 00:01:06.429 [312/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:06.429 [313/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:06.429 [314/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:06.429 [315/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:06.429 [316/745] Linking static target lib/librte_stack.a 00:01:06.429 [317/745] Generating lib/rte_vhost_def with a custom command 00:01:06.429 [318/745] Generating lib/rte_vhost_mingw with a custom command 00:01:06.429 [319/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:06.429 [320/745] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:06.429 [321/745] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:06.429 [322/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:06.429 [323/745] Linking static target lib/librte_dmadev.a 00:01:06.697 [324/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:06.697 [325/745] Linking static target lib/librte_ip_frag.a 00:01:06.697 [326/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:06.697 [327/745] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.697 [328/745] Generating lib/rte_ipsec_def with a custom command 00:01:06.697 [329/745] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.697 [330/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:06.697 [331/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:06.697 [332/745] Generating lib/rte_ipsec_mingw with a custom command 
00:01:06.960 [333/745] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:01:06.960 [334/745] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.960 [335/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:06.960 [336/745] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.221 [337/745] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.221 [338/745] Generating lib/rte_fib_def with a custom command 00:01:07.221 [339/745] Generating lib/rte_fib_mingw with a custom command 00:01:07.221 [340/745] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:07.221 [341/745] Linking static target lib/librte_gso.a 00:01:07.221 [342/745] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:07.221 [343/745] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:07.221 [344/745] Linking static target lib/librte_regexdev.a 00:01:07.221 [345/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:07.485 [346/745] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.485 [347/745] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:07.485 [348/745] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.485 [349/745] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:07.485 [350/745] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:07.485 [351/745] Linking static target lib/librte_pcapng.a 00:01:07.485 [352/745] Linking static target lib/librte_efd.a 00:01:07.745 [353/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:07.745 [354/745] Linking static target lib/librte_lpm.a 00:01:07.745 [355/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:07.745 [356/745] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:07.745 [357/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:07.745 [358/745] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:07.745 [359/745] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:07.745 [360/745] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:07.745 [361/745] Linking static target lib/librte_reorder.a 00:01:08.008 [362/745] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.008 [363/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:08.008 [364/745] Generating lib/rte_port_def with a custom command 00:01:08.008 [365/745] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:08.008 [366/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:08.008 [367/745] Generating lib/rte_port_mingw with a custom command 00:01:08.008 [368/745] Linking static target lib/acl/libavx2_tmp.a 00:01:08.008 [369/745] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:08.008 [370/745] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.008 [371/745] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:08.008 [372/745] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:08.008 [373/745] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:08.008 [374/745] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 
00:01:08.008 [375/745] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:08.008 [376/745] Generating lib/rte_pdump_mingw with a custom command 00:01:08.008 [377/745] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:08.008 [378/745] Generating lib/rte_pdump_def with a custom command 00:01:08.269 [379/745] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:08.269 [380/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:08.269 [381/745] Linking static target lib/librte_security.a 00:01:08.269 [382/745] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:08.269 [383/745] Linking static target lib/librte_power.a 00:01:08.269 [384/745] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.269 [385/745] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.269 [386/745] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.531 [387/745] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:08.531 [388/745] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:08.531 [389/745] Linking static target lib/librte_hash.a 00:01:08.531 [390/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:08.531 [391/745] Linking static target lib/librte_rib.a 00:01:08.531 [392/745] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:08.532 [393/745] Linking static target lib/acl/libavx512_tmp.a 00:01:08.532 [394/745] Linking static target lib/librte_acl.a 00:01:08.532 [395/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:08.795 [396/745] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:08.795 [397/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:08.795 [398/745] Generating lib/rte_table_def with a custom command 00:01:08.795 [399/745] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.795 [400/745] Generating lib/rte_table_mingw with a custom command 00:01:09.062 [401/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:09.062 [402/745] Linking static target lib/librte_ethdev.a 00:01:09.062 [403/745] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.062 [404/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:09.323 [405/745] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.323 [406/745] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:09.323 [407/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:09.323 [408/745] Linking static target lib/librte_mbuf.a 00:01:09.323 [409/745] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:09.323 [410/745] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:09.323 [411/745] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:09.323 [412/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:09.323 [413/745] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:09.323 [414/745] Generating lib/rte_pipeline_mingw with a custom command 00:01:09.323 [415/745] Generating lib/rte_pipeline_def with a custom command 00:01:09.323 [416/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:09.323 
[417/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:09.583 [418/745] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:09.583 [419/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:09.583 [420/745] Generating lib/rte_graph_def with a custom command 00:01:09.583 [421/745] Generating lib/rte_graph_mingw with a custom command 00:01:09.583 [422/745] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:09.583 [423/745] Linking static target lib/librte_fib.a 00:01:09.583 [424/745] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.583 [425/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:09.583 [426/745] Linking static target lib/librte_eventdev.a 00:01:09.864 [427/745] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.864 [428/745] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:09.864 [429/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:09.864 [430/745] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:09.864 [431/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:09.864 [432/745] Linking static target lib/librte_member.a 00:01:09.864 [433/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:09.864 [434/745] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:09.864 [435/745] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:09.864 [436/745] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:09.864 [437/745] Generating lib/rte_node_def with a custom command 00:01:09.864 [438/745] Generating lib/rte_node_mingw with a custom command 00:01:09.864 [439/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:10.127 [440/745] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.127 [441/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:10.127 [442/745] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:10.127 [443/745] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:10.127 [444/745] Linking static target lib/librte_sched.a 00:01:10.127 [445/745] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.127 [446/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:10.127 [447/745] Generating drivers/rte_bus_pci_def with a custom command 00:01:10.127 [448/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:10.127 [449/745] Generating drivers/rte_bus_pci_mingw with a custom command 00:01:10.127 [450/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:10.389 [451/745] Generating drivers/rte_bus_vdev_def with a custom command 00:01:10.389 [452/745] Generating drivers/rte_bus_vdev_mingw with a custom command 00:01:10.389 [453/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:10.389 [454/745] Generating drivers/rte_mempool_ring_def with a custom command 00:01:10.389 [455/745] Generating drivers/rte_mempool_ring_mingw with a custom command 00:01:10.389 [456/745] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.389 [457/745] Compiling C object 
lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:10.389 [458/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:10.389 [459/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:10.389 [460/745] Linking static target lib/librte_cryptodev.a 00:01:10.389 [461/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:10.651 [462/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:10.651 [463/745] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:10.651 [464/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:10.651 [465/745] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:10.651 [466/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:10.651 [467/745] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:10.651 [468/745] Linking static target lib/librte_pdump.a 00:01:10.651 [469/745] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:10.651 [470/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:10.651 [471/745] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:10.651 [472/745] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:10.651 [473/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:10.918 [474/745] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:10.918 [475/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:10.918 [476/745] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:10.918 [477/745] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.918 [478/745] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:10.918 [479/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:10.918 [480/745] Generating drivers/rte_net_i40e_def with a custom command 00:01:10.918 [481/745] Generating drivers/rte_net_i40e_mingw with a custom command 00:01:11.179 [482/745] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:11.179 [483/745] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:11.179 [484/745] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.179 [485/745] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:11.179 [486/745] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:11.179 [487/745] Linking static target drivers/librte_bus_vdev.a 00:01:11.179 [488/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:11.179 [489/745] Linking static target lib/librte_table.a 00:01:11.179 [490/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:11.179 [491/745] Linking static target lib/librte_ipsec.a 00:01:11.439 [492/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:11.439 [493/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:11.439 [494/745] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:11.439 [495/745] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.706 [496/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:11.706 [497/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 
00:01:11.706 [498/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:11.706 [499/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:11.706 [500/745] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:11.706 [501/745] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.706 [502/745] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:11.966 [503/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:11.966 [504/745] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:11.966 [505/745] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:11.966 [506/745] Linking static target lib/librte_graph.a 00:01:11.966 [507/745] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:11.966 [508/745] Linking static target drivers/librte_bus_pci.a 00:01:11.966 [509/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:11.966 [510/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:11.966 [511/745] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:12.234 [512/745] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:12.234 [513/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:12.234 [514/745] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.492 [515/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:12.492 [516/745] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.758 [517/745] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.758 [518/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:12.758 [519/745] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:12.758 [520/745] Linking static target lib/librte_port.a 00:01:12.758 [521/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:12.758 [522/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:13.018 [523/745] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:13.018 [524/745] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:13.018 [525/745] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:13.018 [526/745] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:13.280 [527/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:13.280 [528/745] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:13.280 [529/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:13.280 [530/745] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.280 [531/745] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:13.280 [532/745] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:13.280 [533/745] Linking static target drivers/librte_mempool_ring.a 00:01:13.542 [534/745] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:13.542 [535/745] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:13.542 [536/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:13.542 [537/745] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:13.542 [538/745] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.805 [539/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:13.805 [540/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:13.805 [541/745] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.068 [542/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:14.068 [543/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:14.068 [544/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:14.068 [545/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:14.330 [546/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:14.330 [547/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:14.330 [548/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:14.590 [549/745] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:14.590 [550/745] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:14.590 [551/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:14.854 [552/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:14.854 [553/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:14.854 [554/745] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:14.854 [555/745] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:15.115 [556/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:15.115 [557/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:15.376 [558/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:15.376 [559/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:15.638 [560/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:15.638 [561/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:15.638 [562/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:15.638 [563/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:15.897 [564/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:15.897 [565/745] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:15.897 [566/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:15.897 [567/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:15.897 [568/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:16.162 [569/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:16.162 [570/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:16.162 [571/745] Compiling C object 
lib/librte_node.a.p/node_pkt_cls.c.o 00:01:16.162 [572/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:16.162 [573/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:16.162 [574/745] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:16.431 [575/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:16.431 [576/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:16.431 [577/745] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.431 [578/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:16.431 [579/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:16.431 [580/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:16.431 [581/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:16.431 [582/745] Linking target lib/librte_eal.so.23.0 00:01:16.689 [583/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:16.689 [584/745] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:16.689 [585/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:16.689 [586/745] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:01:16.689 [587/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:16.951 [588/745] Linking target lib/librte_ring.so.23.0 00:01:16.951 [589/745] Linking target lib/librte_meter.so.23.0 00:01:16.951 [590/745] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.214 [591/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:17.214 [592/745] Linking target lib/librte_pci.so.23.0 00:01:17.214 [593/745] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:01:17.214 [594/745] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:01:17.214 [595/745] Linking target lib/librte_timer.so.23.0 00:01:17.214 [596/745] Linking target lib/librte_rcu.so.23.0 00:01:17.214 [597/745] Linking target lib/librte_mempool.so.23.0 00:01:17.214 [598/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:17.214 [599/745] Linking target lib/librte_acl.so.23.0 00:01:17.476 [600/745] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:01:17.476 [601/745] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:01:17.476 [602/745] Linking target lib/librte_cfgfile.so.23.0 00:01:17.476 [603/745] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:01:17.476 [604/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:17.476 [605/745] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:01:17.476 [606/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:17.476 [607/745] Linking target lib/librte_rawdev.so.23.0 00:01:17.476 [608/745] Linking target lib/librte_jobstats.so.23.0 00:01:17.476 [609/745] Linking target lib/librte_dmadev.so.23.0 00:01:17.476 [610/745] Linking target lib/librte_stack.so.23.0 00:01:17.476 [611/745] Linking target lib/librte_mbuf.so.23.0 00:01:17.476 [612/745] Linking target 
lib/librte_rib.so.23.0
00:01:17.476 [613/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:01:17.476 [614/745] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols
00:01:17.476 [615/745] Linking target drivers/librte_bus_pci.so.23.0
00:01:17.476 [616/745] Linking target drivers/librte_bus_vdev.so.23.0
00:01:17.476 [617/745] Linking target lib/librte_graph.so.23.0
00:01:17.739 [618/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:01:17.739 [619/745] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:01:17.739 [620/745] Linking target drivers/librte_mempool_ring.so.23.0
00:01:17.739 [621/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o
00:01:17.739 [622/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:01:17.739 [623/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
00:01:17.739 [624/745] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols
00:01:17.739 [625/745] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols
00:01:17.739 [626/745] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols
00:01:17.739 [627/745] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols
00:01:17.739 [628/745] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols
00:01:17.739 [629/745] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols
00:01:17.999 [630/745] Linking target lib/librte_bbdev.so.23.0
00:01:17.999 [631/745] Linking target lib/librte_distributor.so.23.0
00:01:17.999 [632/745] Linking target lib/librte_compressdev.so.23.0
00:01:17.999 [633/745] Linking target lib/librte_gpudev.so.23.0
00:01:17.999 [634/745] Linking target lib/librte_net.so.23.0
00:01:17.999 [635/745] Linking target lib/librte_regexdev.so.23.0
00:01:17.999 [636/745] Linking target lib/librte_cryptodev.so.23.0
00:01:17.999 [637/745] Linking target lib/librte_reorder.so.23.0
00:01:17.999 [638/745] Linking target lib/librte_sched.so.23.0
00:01:17.999 [639/745] Linking target lib/librte_fib.so.23.0
00:01:17.999 [640/745] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:01:17.999 [641/745] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:01:17.999 [642/745] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:01:17.999 [643/745] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols
00:01:17.999 [644/745] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols
00:01:18.259 [645/745] Linking target lib/librte_hash.so.23.0
00:01:18.259 [646/745] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols
00:01:18.259 [647/745] Linking target lib/librte_security.so.23.0
00:01:18.259 [648/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:01:18.259 [649/745] Linking target lib/librte_cmdline.so.23.0
00:01:18.259 [650/745] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:01:18.259 [651/745] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:01:18.259 [652/745] Linking target lib/librte_ethdev.so.23.0
00:01:18.259 [653/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o
00:01:18.259 [654/745] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:01:18.259 [655/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:01:18.259 [656/745] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols
00:01:18.259 [657/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:01:18.259 [658/745] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:01:18.259 [659/745] Linking target lib/librte_efd.so.23.0
00:01:18.259 [660/745] Linking target lib/librte_lpm.so.23.0
00:01:18.259 [661/745] Linking target lib/librte_member.so.23.0
00:01:18.259 [662/745] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols
00:01:18.259 [663/745] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:01:18.517 [664/745] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols
00:01:18.517 [665/745] Linking target lib/librte_metrics.so.23.0
00:01:18.517 [666/745] Linking target lib/librte_pcapng.so.23.0
00:01:18.517 [667/745] Linking target lib/librte_gso.so.23.0
00:01:18.517 [668/745] Linking target lib/librte_ip_frag.so.23.0
00:01:18.517 [669/745] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:01:18.517 [670/745] Linking target lib/librte_bpf.so.23.0
00:01:18.517 [671/745] Linking target lib/librte_gro.so.23.0
00:01:18.517 [672/745] Linking target lib/librte_power.so.23.0
00:01:18.517 [673/745] Linking target lib/librte_ipsec.so.23.0
00:01:18.517 [674/745] Linking target lib/librte_eventdev.so.23.0
00:01:18.517 [675/745] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols
00:01:18.517 [676/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:01:18.517 [677/745] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols
00:01:18.518 [678/745] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols
00:01:18.813 [679/745] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols
00:01:18.813 [680/745] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:01:18.813 [681/745] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols
00:01:18.813 [682/745] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols
00:01:18.813 [683/745] Linking target lib/librte_latencystats.so.23.0
00:01:18.813 [684/745] Linking target lib/librte_bitratestats.so.23.0
00:01:18.813 [685/745] Linking target lib/librte_pdump.so.23.0
00:01:18.813 [686/745] Linking target lib/librte_port.so.23.0
00:01:18.813 [687/745] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:01:18.813 [688/745] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:01:18.813 [689/745] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols
00:01:18.813 [690/745] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:01:18.813 [691/745] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:01:19.072 [692/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:01:19.072 [693/745] Linking target lib/librte_table.so.23.0
00:01:19.072 [694/745] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols
00:01:19.072 [695/745] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:01:19.330 [696/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:01:19.330 [697/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:01:19.330 [698/745] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:01:19.897 [699/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:01:19.897 [700/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:01:19.897 [701/745] Linking static target drivers/libtmp_rte_net_i40e.a
00:01:19.897 [702/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:01:20.155 [703/745] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:01:20.413 [704/745] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:01:20.413 [705/745] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:01:20.413 [706/745] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:01:20.413 [707/745] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:01:20.413 [708/745] Linking static target drivers/librte_net_i40e.a
00:01:20.671 [709/745] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:01:20.929 [710/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:01:20.929 [711/745] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:01:20.929 [712/745] Linking target drivers/librte_net_i40e.so.23.0
00:01:22.302 [713/745] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:01:22.302 [714/745] Linking static target lib/librte_node.a
00:01:22.302 [715/745] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:01:22.302 [716/745] Linking target lib/librte_node.so.23.0
00:01:22.302 [717/745] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:01:22.868 [718/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:01:23.449 [719/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:01:31.555 [720/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:03.631 [721/745] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:03.631 [722/745] Linking static target lib/librte_vhost.a
00:02:03.631 [723/745] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:03.631 [724/745] Linking target lib/librte_vhost.so.23.0
00:02:13.649 [725/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:02:13.649 [726/745] Linking static target lib/librte_pipeline.a
00:02:14.214 [727/745] Linking target app/dpdk-dumpcap
00:02:14.214 [728/745] Linking target app/dpdk-test-security-perf
00:02:14.214 [729/745] Linking target app/dpdk-test-fib
00:02:14.214 [730/745] Linking target app/dpdk-test-regex
00:02:14.214 [731/745] Linking target app/dpdk-test-flow-perf
00:02:14.214 [732/745] Linking target app/dpdk-test-pipeline
00:02:14.214 [733/745] Linking target app/dpdk-test-sad
00:02:14.214 [734/745] Linking target app/dpdk-proc-info
00:02:14.214 [735/745] Linking target app/dpdk-test-gpudev
00:02:14.214 [736/745] Linking target app/dpdk-test-cmdline
00:02:14.214 [737/745] Linking target app/dpdk-test-bbdev
00:02:14.214 [738/745] Linking target app/dpdk-test-compress-perf
00:02:14.215 [739/745] Linking target app/dpdk-pdump
00:02:14.215 [740/745] Linking target app/dpdk-test-acl
00:02:14.215 [741/745] Linking target app/dpdk-test-eventdev
00:02:14.215 [742/745] Linking target app/dpdk-test-crypto-perf
00:02:14.215 [743/745] Linking target app/dpdk-testpmd
00:02:16.112 [744/745] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.112 [745/745] Linking target lib/librte_pipeline.so.23.0
00:02:16.112 14:00:43 build_native_dpdk -- common/autobuild_common.sh@190 -- $ ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -j48 install
00:02:16.112 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp'
00:02:16.378 [0/1] Installing files.
00:02:16.378 Installing subdir /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples
00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/Makefile to
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 
00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:16.378 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:16.378 
Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:16.378 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:16.378 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:16.379 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:16.379 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/main.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:16.379 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:16.379 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:16.379 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:16.380 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:16.380 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:16.381 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:16.381 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:16.381 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:16.381 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:16.381 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:16.381 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:16.381 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:16.381 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:16.381 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:16.381 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:16.381 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:16.381 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:16.381 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:16.381 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool
00:02:16.381 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:16.381 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:16.381 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:16.381 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:16.381 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:16.381 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:16.381 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:16.381 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:02:16.381 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:02:16.381 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:16.381 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:16.381 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:02:16.381 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:02:16.381 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.381 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.951 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.951 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.951 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.951 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.951 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.951 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.951 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.951 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.951 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.951 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.951 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.951 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.951 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.951 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.951 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.951 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.951 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.951 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.951 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.951 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.951 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.951 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.951 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.951 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.951 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.951 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.951 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.951 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.951 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.951 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.951 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.951 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.951 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.951 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.952 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.952 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.952 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.952 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.952 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.952 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.952 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.952 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.952 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.952 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.952 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.952 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.952 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.952 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.952 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.952 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.952 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.952 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.952 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.952 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.952 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.952 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.952 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.952 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.952 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.952 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.952 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.952 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0
00:02:16.952 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.952 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0
00:02:16.952 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.952 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0
00:02:16.952 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib
00:02:16.952 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0
00:02:16.952 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin
00:02:16.952 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin
00:02:16.952 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin
00:02:16.952 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin
00:02:16.952 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin
00:02:16.952 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin
00:02:16.952 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin
00:02:16.952 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin
00:02:16.952 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin
00:02:16.952 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin
00:02:16.952 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin
00:02:16.952 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin
00:02:16.952 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin
00:02:16.952 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin
00:02:16.952 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin
00:02:16.952 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin
00:02:16.952 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.952 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 
00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.953 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 
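The run above stages DPDK's public headers into dpdk/build/include (and continues below through the graph, node, and driver-specific ones). A minimal smoke test for the staged tree, assuming only a host C compiler on the PATH, is to compile an empty translation unit against it; this check is hypothetical and not part of the job:

    # Hypothetical check: any header missing from the staged include tree fails the compile
    echo '#include <rte_version.h>' | cc -x c -c - -o /dev/null \
        -I /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include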
00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig 00:02:16.954 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig 00:02:16.954 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:02:16.954 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:16.954 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:02:16.954 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:16.954 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:02:16.954 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:16.954 Installing symlink pointing to librte_ring.so.23.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:02:16.954 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:16.954 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:02:16.954 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:16.954 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:02:16.954 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:16.954 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:02:16.954 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:16.954 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_net.so.23 00:02:16.954 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_net.so 00:02:16.954 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:02:16.954 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:16.954 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:02:16.954 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:16.954 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:02:16.954 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:16.954 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:02:16.954 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:16.954 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:02:16.954 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:16.954 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:02:16.954 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:16.954 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:02:16.954 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:16.954 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:02:16.954 Installing symlink pointing to librte_acl.so.23 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:16.954 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:02:16.954 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:16.954 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:02:16.954 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:16.954 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:02:16.954 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:16.954 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:02:16.954 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:16.954 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:02:16.954 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:16.954 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:02:16.954 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:16.954 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:02:16.954 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:16.954 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:02:16.954 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:16.954 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:02:16.954 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:16.954 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:02:16.954 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:16.954 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:02:16.954 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:16.954 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:02:16.954 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:16.954 Installing 
symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:02:16.954 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:16.954 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:02:16.954 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:16.954 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:02:16.954 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:16.954 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:02:16.954 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:16.954 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_member.so.23 00:02:16.954 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_member.so 00:02:16.954 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:02:16.954 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:16.954 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_power.so.23 00:02:16.954 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_power.so 00:02:16.954 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:02:16.954 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:16.954 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:02:16.954 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:16.954 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:02:16.954 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:16.954 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:02:16.954 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:16.954 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:02:16.954 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:16.954 Installing symlink pointing to librte_sched.so.23.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:02:16.954 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:16.954 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_security.so.23 00:02:16.954 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_security.so 00:02:16.955 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:02:16.955 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:16.955 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:02:16.955 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:16.955 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:02:16.955 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:16.955 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:02:16.955 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:16.955 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_port.so.23 00:02:16.955 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_port.so 00:02:16.955 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:02:16.955 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:16.955 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:16.955 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:16.955 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:16.955 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:16.955 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:16.955 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:16.955 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:16.955 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:16.955 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:16.955 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:16.955 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:16.955 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:16.955 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_table.so.23 00:02:16.955 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_table.so 00:02:16.955 Installing symlink pointing to librte_pipeline.so.23.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:02:16.955 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:16.955 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:02:16.955 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:16.955 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_node.so.23 00:02:16.955 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_node.so 00:02:16.955 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:16.955 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:16.955 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:16.955 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:16.955 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:16.955 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:16.955 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:16.955 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:16.955 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:16.955 14:00:44 build_native_dpdk -- common/autobuild_common.sh@192 -- $ uname -s 00:02:16.955 14:00:44 build_native_dpdk -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:16.955 14:00:44 build_native_dpdk -- common/autobuild_common.sh@203 -- $ cat 00:02:16.955 14:00:44 build_native_dpdk -- common/autobuild_common.sh@208 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:16.955 00:02:16.955 real 1m21.224s 00:02:16.955 user 14m33.038s 00:02:16.955 sys 1m47.722s 00:02:16.955 14:00:44 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:16.955 14:00:44 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:16.955 ************************************ 00:02:16.955 END TEST build_native_dpdk 00:02:16.955 ************************************ 00:02:16.955 14:00:44 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:16.955 14:00:44 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:16.955 14:00:44 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:16.955 14:00:44 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:16.955 14:00:44 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:16.955 14:00:44 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:16.955 14:00:44 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:16.955 
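The DPDK install phase above finishes by writing the versioned symlink chains (librte_X.so -> librte_X.so.23 -> librte_X.so.23.0), relocating the PMDs into the dpdk/pmds-23.0 plugin directory via symlink-drivers-solibs.sh, and dropping libdpdk.pc / libdpdk-libs.pc into build/lib/pkgconfig. The SPDK configure step that follows locates DPDK through exactly that pkgconfig directory (note the --with-dpdk=...dpdk/build flag and the "Using ... pkgconfig for additional libs" line below). A hand-run equivalent, sketched under the assumption that pkg-config is installed:

    # Hypothetical: query the staged .pc files that the configure step below consumes
    export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig
    pkg-config --modversion libdpdk   # should report a 22.11.x version for this tree
    pkg-config --libs libdpdk         # -lrte_... flags, resolved through the symlink chains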
14:00:44 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --with-shared 00:02:16.955 Using /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:17.212 DPDK libraries: /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:17.212 DPDK includes: //var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:17.212 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:02:17.469 Using 'verbs' RDMA provider 00:02:28.013 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:36.122 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:36.380 Creating mk/config.mk...done. 00:02:36.380 Creating mk/cc.flags.mk...done. 00:02:36.380 Type 'make' to build. 00:02:36.380 14:01:03 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:02:36.380 14:01:03 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:02:36.380 14:01:03 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:36.380 14:01:03 -- common/autotest_common.sh@10 -- $ set +x 00:02:36.380 ************************************ 00:02:36.380 START TEST make 00:02:36.380 ************************************ 00:02:36.380 14:01:03 make -- common/autotest_common.sh@1121 -- $ make -j48 00:02:36.637 make[1]: Nothing to be done for 'all'. 00:02:51.582 CC lib/log/log.o 00:02:51.582 CC lib/log/log_flags.o 00:02:51.582 CC lib/log/log_deprecated.o 00:02:51.582 CC lib/ut_mock/mock.o 00:02:51.582 CC lib/ut/ut.o 00:02:51.582 LIB libspdk_log.a 00:02:51.582 LIB libspdk_ut_mock.a 00:02:51.582 LIB libspdk_ut.a 00:02:51.582 SO libspdk_log.so.7.0 00:02:51.582 SO libspdk_ut_mock.so.6.0 00:02:51.582 SO libspdk_ut.so.2.0 00:02:51.582 SYMLINK libspdk_ut_mock.so 00:02:51.582 SYMLINK libspdk_ut.so 00:02:51.582 SYMLINK libspdk_log.so 00:02:51.840 CXX lib/trace_parser/trace.o 00:02:51.840 CC lib/dma/dma.o 00:02:51.840 CC lib/ioat/ioat.o 00:02:51.840 CC lib/util/base64.o 00:02:51.840 CC lib/util/bit_array.o 00:02:51.840 CC lib/util/cpuset.o 00:02:51.840 CC lib/util/crc16.o 00:02:51.840 CC lib/util/crc32.o 00:02:51.840 CC lib/util/crc32c.o 00:02:51.840 CC lib/util/crc32_ieee.o 00:02:51.840 CC lib/util/crc64.o 00:02:51.840 CC lib/util/dif.o 00:02:51.840 CC lib/util/fd.o 00:02:51.840 CC lib/util/file.o 00:02:51.840 CC lib/util/hexlify.o 00:02:51.840 CC lib/util/iov.o 00:02:51.840 CC lib/util/math.o 00:02:51.840 CC lib/util/pipe.o 00:02:51.840 CC lib/util/strerror_tls.o 00:02:51.840 CC lib/util/string.o 00:02:51.840 CC lib/util/uuid.o 00:02:51.840 CC lib/util/fd_group.o 00:02:51.840 CC lib/util/xor.o 00:02:51.840 CC lib/util/zipf.o 00:02:51.840 CC lib/vfio_user/host/vfio_user_pci.o 00:02:51.840 CC lib/vfio_user/host/vfio_user.o 00:02:52.098 LIB libspdk_dma.a 00:02:52.098 SO libspdk_dma.so.4.0 00:02:52.098 SYMLINK libspdk_dma.so 00:02:52.098 LIB libspdk_ioat.a 00:02:52.098 SO libspdk_ioat.so.7.0 00:02:52.098 LIB libspdk_vfio_user.a 00:02:52.098 SYMLINK libspdk_ioat.so 00:02:52.098 SO libspdk_vfio_user.so.5.0 00:02:52.356 SYMLINK libspdk_vfio_user.so 00:02:52.356 LIB libspdk_util.a 00:02:52.356 SO libspdk_util.so.9.0 00:02:52.615 SYMLINK libspdk_util.so 00:02:52.615 CC lib/rdma/common.o 00:02:52.615 CC 
lib/conf/conf.o 00:02:52.615 CC lib/idxd/idxd.o 00:02:52.615 CC lib/json/json_parse.o 00:02:52.615 CC lib/vmd/vmd.o 00:02:52.615 CC lib/env_dpdk/env.o 00:02:52.615 CC lib/rdma/rdma_verbs.o 00:02:52.615 CC lib/idxd/idxd_user.o 00:02:52.615 CC lib/vmd/led.o 00:02:52.615 CC lib/env_dpdk/memory.o 00:02:52.615 CC lib/json/json_util.o 00:02:52.615 CC lib/idxd/idxd_kernel.o 00:02:52.615 CC lib/env_dpdk/pci.o 00:02:52.615 CC lib/json/json_write.o 00:02:52.615 CC lib/env_dpdk/init.o 00:02:52.615 CC lib/env_dpdk/threads.o 00:02:52.615 CC lib/env_dpdk/pci_ioat.o 00:02:52.615 CC lib/env_dpdk/pci_virtio.o 00:02:52.615 CC lib/env_dpdk/pci_vmd.o 00:02:52.615 CC lib/env_dpdk/pci_idxd.o 00:02:52.615 CC lib/env_dpdk/pci_event.o 00:02:52.615 CC lib/env_dpdk/sigbus_handler.o 00:02:52.615 CC lib/env_dpdk/pci_dpdk.o 00:02:52.615 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:52.615 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:52.873 LIB libspdk_trace_parser.a 00:02:52.873 SO libspdk_trace_parser.so.5.0 00:02:52.873 SYMLINK libspdk_trace_parser.so 00:02:52.873 LIB libspdk_conf.a 00:02:53.131 SO libspdk_conf.so.6.0 00:02:53.131 LIB libspdk_json.a 00:02:53.131 SYMLINK libspdk_conf.so 00:02:53.131 SO libspdk_json.so.6.0 00:02:53.131 LIB libspdk_rdma.a 00:02:53.131 SYMLINK libspdk_json.so 00:02:53.131 SO libspdk_rdma.so.6.0 00:02:53.131 SYMLINK libspdk_rdma.so 00:02:53.389 CC lib/jsonrpc/jsonrpc_server.o 00:02:53.389 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:53.389 CC lib/jsonrpc/jsonrpc_client.o 00:02:53.389 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:53.389 LIB libspdk_idxd.a 00:02:53.389 SO libspdk_idxd.so.12.0 00:02:53.389 SYMLINK libspdk_idxd.so 00:02:53.389 LIB libspdk_vmd.a 00:02:53.389 SO libspdk_vmd.so.6.0 00:02:53.389 SYMLINK libspdk_vmd.so 00:02:53.663 LIB libspdk_jsonrpc.a 00:02:53.663 SO libspdk_jsonrpc.so.6.0 00:02:53.663 SYMLINK libspdk_jsonrpc.so 00:02:53.921 CC lib/rpc/rpc.o 00:02:53.921 LIB libspdk_rpc.a 00:02:54.179 SO libspdk_rpc.so.6.0 00:02:54.179 SYMLINK libspdk_rpc.so 00:02:54.179 CC lib/trace/trace.o 00:02:54.179 CC lib/trace/trace_flags.o 00:02:54.179 CC lib/trace/trace_rpc.o 00:02:54.179 CC lib/notify/notify.o 00:02:54.179 CC lib/keyring/keyring.o 00:02:54.179 CC lib/notify/notify_rpc.o 00:02:54.179 CC lib/keyring/keyring_rpc.o 00:02:54.437 LIB libspdk_notify.a 00:02:54.437 SO libspdk_notify.so.6.0 00:02:54.437 SYMLINK libspdk_notify.so 00:02:54.437 LIB libspdk_keyring.a 00:02:54.437 LIB libspdk_trace.a 00:02:54.437 SO libspdk_keyring.so.1.0 00:02:54.437 SO libspdk_trace.so.10.0 00:02:54.695 SYMLINK libspdk_keyring.so 00:02:54.695 SYMLINK libspdk_trace.so 00:02:54.695 LIB libspdk_env_dpdk.a 00:02:54.695 CC lib/thread/thread.o 00:02:54.695 CC lib/thread/iobuf.o 00:02:54.695 CC lib/sock/sock.o 00:02:54.695 CC lib/sock/sock_rpc.o 00:02:54.695 SO libspdk_env_dpdk.so.14.0 00:02:54.953 SYMLINK libspdk_env_dpdk.so 00:02:55.211 LIB libspdk_sock.a 00:02:55.211 SO libspdk_sock.so.9.0 00:02:55.211 SYMLINK libspdk_sock.so 00:02:55.469 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:55.469 CC lib/nvme/nvme_ctrlr.o 00:02:55.469 CC lib/nvme/nvme_fabric.o 00:02:55.469 CC lib/nvme/nvme_ns_cmd.o 00:02:55.469 CC lib/nvme/nvme_ns.o 00:02:55.469 CC lib/nvme/nvme_pcie_common.o 00:02:55.469 CC lib/nvme/nvme_pcie.o 00:02:55.469 CC lib/nvme/nvme_qpair.o 00:02:55.469 CC lib/nvme/nvme.o 00:02:55.469 CC lib/nvme/nvme_quirks.o 00:02:55.469 CC lib/nvme/nvme_transport.o 00:02:55.469 CC lib/nvme/nvme_discovery.o 00:02:55.469 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:55.469 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:55.469 CC lib/nvme/nvme_tcp.o 00:02:55.469 
CC lib/nvme/nvme_opal.o 00:02:55.469 CC lib/nvme/nvme_io_msg.o 00:02:55.469 CC lib/nvme/nvme_poll_group.o 00:02:55.469 CC lib/nvme/nvme_zns.o 00:02:55.469 CC lib/nvme/nvme_stubs.o 00:02:55.469 CC lib/nvme/nvme_auth.o 00:02:55.469 CC lib/nvme/nvme_cuse.o 00:02:55.469 CC lib/nvme/nvme_rdma.o 00:02:56.403 LIB libspdk_thread.a 00:02:56.403 SO libspdk_thread.so.10.0 00:02:56.403 SYMLINK libspdk_thread.so 00:02:56.662 CC lib/accel/accel.o 00:02:56.662 CC lib/init/json_config.o 00:02:56.662 CC lib/virtio/virtio.o 00:02:56.662 CC lib/blob/blobstore.o 00:02:56.662 CC lib/accel/accel_rpc.o 00:02:56.662 CC lib/init/subsystem.o 00:02:56.662 CC lib/blob/request.o 00:02:56.662 CC lib/virtio/virtio_vhost_user.o 00:02:56.662 CC lib/accel/accel_sw.o 00:02:56.662 CC lib/init/subsystem_rpc.o 00:02:56.662 CC lib/blob/zeroes.o 00:02:56.662 CC lib/virtio/virtio_vfio_user.o 00:02:56.662 CC lib/init/rpc.o 00:02:56.662 CC lib/virtio/virtio_pci.o 00:02:56.662 CC lib/blob/blob_bs_dev.o 00:02:56.920 LIB libspdk_init.a 00:02:56.920 SO libspdk_init.so.5.0 00:02:56.920 LIB libspdk_virtio.a 00:02:56.920 SYMLINK libspdk_init.so 00:02:56.920 SO libspdk_virtio.so.7.0 00:02:56.920 SYMLINK libspdk_virtio.so 00:02:57.179 CC lib/event/app.o 00:02:57.179 CC lib/event/reactor.o 00:02:57.179 CC lib/event/log_rpc.o 00:02:57.179 CC lib/event/app_rpc.o 00:02:57.179 CC lib/event/scheduler_static.o 00:02:57.437 LIB libspdk_event.a 00:02:57.437 SO libspdk_event.so.13.0 00:02:57.695 SYMLINK libspdk_event.so 00:02:57.695 LIB libspdk_accel.a 00:02:57.695 SO libspdk_accel.so.15.0 00:02:57.695 LIB libspdk_nvme.a 00:02:57.695 SYMLINK libspdk_accel.so 00:02:57.953 SO libspdk_nvme.so.13.0 00:02:57.953 CC lib/bdev/bdev.o 00:02:57.953 CC lib/bdev/bdev_rpc.o 00:02:57.953 CC lib/bdev/bdev_zone.o 00:02:57.953 CC lib/bdev/part.o 00:02:57.953 CC lib/bdev/scsi_nvme.o 00:02:58.212 SYMLINK libspdk_nvme.so 00:02:59.584 LIB libspdk_blob.a 00:02:59.584 SO libspdk_blob.so.11.0 00:02:59.584 SYMLINK libspdk_blob.so 00:02:59.841 CC lib/blobfs/blobfs.o 00:02:59.841 CC lib/blobfs/tree.o 00:02:59.841 CC lib/lvol/lvol.o 00:03:00.416 LIB libspdk_bdev.a 00:03:00.416 SO libspdk_bdev.so.15.0 00:03:00.416 SYMLINK libspdk_bdev.so 00:03:00.683 LIB libspdk_blobfs.a 00:03:00.683 SO libspdk_blobfs.so.10.0 00:03:00.683 SYMLINK libspdk_blobfs.so 00:03:00.683 CC lib/ublk/ublk.o 00:03:00.683 CC lib/nbd/nbd.o 00:03:00.683 CC lib/nvmf/ctrlr.o 00:03:00.683 CC lib/nbd/nbd_rpc.o 00:03:00.683 CC lib/ublk/ublk_rpc.o 00:03:00.683 CC lib/nvmf/ctrlr_discovery.o 00:03:00.683 CC lib/nvmf/ctrlr_bdev.o 00:03:00.683 CC lib/nvmf/subsystem.o 00:03:00.683 CC lib/scsi/dev.o 00:03:00.683 CC lib/nvmf/nvmf.o 00:03:00.683 CC lib/nvmf/nvmf_rpc.o 00:03:00.683 CC lib/scsi/lun.o 00:03:00.683 CC lib/nvmf/transport.o 00:03:00.683 CC lib/scsi/port.o 00:03:00.683 CC lib/nvmf/tcp.o 00:03:00.683 CC lib/ftl/ftl_core.o 00:03:00.683 CC lib/scsi/scsi.o 00:03:00.683 CC lib/nvmf/stubs.o 00:03:00.683 CC lib/ftl/ftl_init.o 00:03:00.683 CC lib/scsi/scsi_bdev.o 00:03:00.683 CC lib/ftl/ftl_layout.o 00:03:00.683 CC lib/scsi/scsi_pr.o 00:03:00.683 CC lib/nvmf/rdma.o 00:03:00.683 CC lib/nvmf/mdns_server.o 00:03:00.683 CC lib/nvmf/auth.o 00:03:00.683 CC lib/ftl/ftl_debug.o 00:03:00.683 CC lib/scsi/scsi_rpc.o 00:03:00.683 CC lib/ftl/ftl_sb.o 00:03:00.683 CC lib/ftl/ftl_io.o 00:03:00.683 CC lib/scsi/task.o 00:03:00.683 CC lib/ftl/ftl_l2p.o 00:03:00.683 CC lib/ftl/ftl_l2p_flat.o 00:03:00.683 CC lib/ftl/ftl_nv_cache.o 00:03:00.683 CC lib/ftl/ftl_band.o 00:03:00.683 CC lib/ftl/ftl_band_ops.o 00:03:00.683 CC lib/ftl/ftl_writer.o 
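The LIB / SO / SYMLINK triplets in the make output are SPDK's shared-library packaging (enabled by the --with-shared configure flag above): each component is archived as libspdk_X.a, linked as a versioned shared object whose presence the "SO libspdk_X.so.N.M" lines verify, and then exposed through an unversioned development symlink. A hypothetical post-build spot check, assuming the shared objects land in spdk/build/lib as in a default layout:

    # Hypothetical: the embedded SONAME should match the versioned name the SO lines check for
    objdump -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib/libspdk_log.so | grep SONAME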
00:03:00.683 CC lib/ftl/ftl_rq.o 00:03:00.683 CC lib/ftl/ftl_reloc.o 00:03:00.683 CC lib/ftl/ftl_l2p_cache.o 00:03:00.683 CC lib/ftl/ftl_p2l.o 00:03:00.683 CC lib/ftl/mngt/ftl_mngt.o 00:03:00.683 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:00.683 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:00.683 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:00.683 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:00.683 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:00.683 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:00.683 LIB libspdk_lvol.a 00:03:00.945 SO libspdk_lvol.so.10.0 00:03:00.945 SYMLINK libspdk_lvol.so 00:03:00.945 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:00.945 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:00.945 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:00.945 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:00.945 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:00.945 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:01.205 CC lib/ftl/utils/ftl_conf.o 00:03:01.205 CC lib/ftl/utils/ftl_md.o 00:03:01.205 CC lib/ftl/utils/ftl_mempool.o 00:03:01.205 CC lib/ftl/utils/ftl_bitmap.o 00:03:01.205 CC lib/ftl/utils/ftl_property.o 00:03:01.205 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:01.205 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:01.205 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:01.205 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:01.205 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:01.205 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:01.205 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:01.205 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:01.205 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:01.205 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:01.205 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:01.205 CC lib/ftl/base/ftl_base_dev.o 00:03:01.463 CC lib/ftl/base/ftl_base_bdev.o 00:03:01.463 CC lib/ftl/ftl_trace.o 00:03:01.463 LIB libspdk_nbd.a 00:03:01.463 SO libspdk_nbd.so.7.0 00:03:01.463 SYMLINK libspdk_nbd.so 00:03:01.463 LIB libspdk_scsi.a 00:03:01.721 SO libspdk_scsi.so.9.0 00:03:01.721 LIB libspdk_ublk.a 00:03:01.721 SYMLINK libspdk_scsi.so 00:03:01.721 SO libspdk_ublk.so.3.0 00:03:01.721 SYMLINK libspdk_ublk.so 00:03:01.978 CC lib/vhost/vhost.o 00:03:01.978 CC lib/vhost/vhost_rpc.o 00:03:01.978 CC lib/iscsi/conn.o 00:03:01.978 CC lib/iscsi/init_grp.o 00:03:01.978 CC lib/vhost/vhost_scsi.o 00:03:01.978 CC lib/vhost/vhost_blk.o 00:03:01.978 CC lib/iscsi/iscsi.o 00:03:01.978 CC lib/iscsi/md5.o 00:03:01.978 CC lib/vhost/rte_vhost_user.o 00:03:01.978 CC lib/iscsi/param.o 00:03:01.978 CC lib/iscsi/portal_grp.o 00:03:01.978 CC lib/iscsi/tgt_node.o 00:03:01.978 CC lib/iscsi/iscsi_subsystem.o 00:03:01.978 CC lib/iscsi/iscsi_rpc.o 00:03:01.978 CC lib/iscsi/task.o 00:03:02.236 LIB libspdk_ftl.a 00:03:02.236 SO libspdk_ftl.so.9.0 00:03:02.801 SYMLINK libspdk_ftl.so 00:03:03.059 LIB libspdk_vhost.a 00:03:03.059 SO libspdk_vhost.so.8.0 00:03:03.317 LIB libspdk_nvmf.a 00:03:03.317 SYMLINK libspdk_vhost.so 00:03:03.317 SO libspdk_nvmf.so.18.0 00:03:03.317 LIB libspdk_iscsi.a 00:03:03.317 SO libspdk_iscsi.so.8.0 00:03:03.574 SYMLINK libspdk_nvmf.so 00:03:03.574 SYMLINK libspdk_iscsi.so 00:03:03.832 CC module/env_dpdk/env_dpdk_rpc.o 00:03:03.832 CC module/sock/posix/posix.o 00:03:03.832 CC module/keyring/linux/keyring.o 00:03:03.832 CC module/scheduler/gscheduler/gscheduler.o 00:03:03.832 CC module/accel/dsa/accel_dsa.o 00:03:03.832 CC module/blob/bdev/blob_bdev.o 00:03:03.832 CC module/keyring/linux/keyring_rpc.o 00:03:03.832 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:03.832 CC module/accel/dsa/accel_dsa_rpc.o 00:03:03.832 CC module/accel/iaa/accel_iaa.o 00:03:03.832 CC module/accel/ioat/accel_ioat.o 
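Everything under lib/ above is core SPDK; the module/ objects that follow are the pluggable implementations (posix sockets, the linux keyring backend, the gscheduler/dynamic/dpdk_governor schedulers, the DSA/IAA/ioat accel drivers) that register themselves with those core libraries at startup. Which one is active is a runtime choice, typically made over the RPC socket; a hedged illustration, assuming a running SPDK target and the stock rpc.py script:

    # Hypothetical: switch a running target to the dynamic scheduler module built above
    scripts/rpc.py framework_set_scheduler dynamic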
00:03:03.832 CC module/accel/ioat/accel_ioat_rpc.o 00:03:03.832 CC module/accel/iaa/accel_iaa_rpc.o 00:03:03.832 CC module/accel/error/accel_error.o 00:03:03.832 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:03.832 CC module/accel/error/accel_error_rpc.o 00:03:03.832 CC module/keyring/file/keyring.o 00:03:03.832 CC module/keyring/file/keyring_rpc.o 00:03:03.832 LIB libspdk_env_dpdk_rpc.a 00:03:04.090 SO libspdk_env_dpdk_rpc.so.6.0 00:03:04.090 SYMLINK libspdk_env_dpdk_rpc.so 00:03:04.090 LIB libspdk_keyring_linux.a 00:03:04.090 LIB libspdk_keyring_file.a 00:03:04.090 LIB libspdk_scheduler_gscheduler.a 00:03:04.090 LIB libspdk_scheduler_dpdk_governor.a 00:03:04.090 SO libspdk_keyring_linux.so.1.0 00:03:04.090 SO libspdk_scheduler_gscheduler.so.4.0 00:03:04.090 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:04.090 SO libspdk_keyring_file.so.1.0 00:03:04.090 LIB libspdk_accel_error.a 00:03:04.090 LIB libspdk_accel_ioat.a 00:03:04.090 LIB libspdk_scheduler_dynamic.a 00:03:04.090 LIB libspdk_accel_iaa.a 00:03:04.090 SO libspdk_accel_error.so.2.0 00:03:04.090 SO libspdk_scheduler_dynamic.so.4.0 00:03:04.090 SO libspdk_accel_ioat.so.6.0 00:03:04.090 SYMLINK libspdk_scheduler_gscheduler.so 00:03:04.090 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:04.090 SYMLINK libspdk_keyring_linux.so 00:03:04.090 SYMLINK libspdk_keyring_file.so 00:03:04.090 SO libspdk_accel_iaa.so.3.0 00:03:04.090 SYMLINK libspdk_scheduler_dynamic.so 00:03:04.090 SYMLINK libspdk_accel_error.so 00:03:04.090 LIB libspdk_accel_dsa.a 00:03:04.090 SYMLINK libspdk_accel_ioat.so 00:03:04.090 LIB libspdk_blob_bdev.a 00:03:04.348 SYMLINK libspdk_accel_iaa.so 00:03:04.348 SO libspdk_accel_dsa.so.5.0 00:03:04.348 SO libspdk_blob_bdev.so.11.0 00:03:04.348 SYMLINK libspdk_accel_dsa.so 00:03:04.348 SYMLINK libspdk_blob_bdev.so 00:03:04.610 CC module/bdev/lvol/vbdev_lvol.o 00:03:04.610 CC module/bdev/iscsi/bdev_iscsi.o 00:03:04.610 CC module/bdev/gpt/gpt.o 00:03:04.610 CC module/bdev/delay/vbdev_delay.o 00:03:04.610 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:04.610 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:04.610 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:04.610 CC module/bdev/ftl/bdev_ftl.o 00:03:04.610 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:04.610 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:04.610 CC module/bdev/raid/bdev_raid.o 00:03:04.610 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:04.610 CC module/blobfs/bdev/blobfs_bdev.o 00:03:04.610 CC module/bdev/aio/bdev_aio.o 00:03:04.610 CC module/bdev/null/bdev_null_rpc.o 00:03:04.610 CC module/bdev/aio/bdev_aio_rpc.o 00:03:04.610 CC module/bdev/gpt/vbdev_gpt.o 00:03:04.610 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:04.610 CC module/bdev/nvme/bdev_nvme.o 00:03:04.610 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:04.610 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:04.611 CC module/bdev/null/bdev_null.o 00:03:04.611 CC module/bdev/passthru/vbdev_passthru.o 00:03:04.611 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:04.611 CC module/bdev/split/vbdev_split_rpc.o 00:03:04.611 CC module/bdev/split/vbdev_split.o 00:03:04.611 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:04.611 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:04.611 CC module/bdev/nvme/nvme_rpc.o 00:03:04.611 CC module/bdev/nvme/bdev_mdns_client.o 00:03:04.611 CC module/bdev/raid/bdev_raid_rpc.o 00:03:04.611 CC module/bdev/malloc/bdev_malloc.o 00:03:04.611 CC module/bdev/error/vbdev_error.o 00:03:04.611 CC module/bdev/raid/bdev_raid_sb.o 00:03:04.611 CC module/bdev/nvme/vbdev_opal.o 00:03:04.611 CC 
module/bdev/malloc/bdev_malloc_rpc.o 00:03:04.611 CC module/bdev/error/vbdev_error_rpc.o 00:03:04.611 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:04.611 CC module/bdev/raid/raid0.o 00:03:04.611 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:04.611 CC module/bdev/raid/raid1.o 00:03:04.611 CC module/bdev/raid/concat.o 00:03:04.872 LIB libspdk_sock_posix.a 00:03:04.872 SO libspdk_sock_posix.so.6.0 00:03:04.872 LIB libspdk_bdev_split.a 00:03:04.872 SYMLINK libspdk_sock_posix.so 00:03:04.872 LIB libspdk_blobfs_bdev.a 00:03:04.872 SO libspdk_bdev_split.so.6.0 00:03:04.872 SO libspdk_blobfs_bdev.so.6.0 00:03:04.872 LIB libspdk_bdev_null.a 00:03:04.872 LIB libspdk_bdev_passthru.a 00:03:05.130 SO libspdk_bdev_null.so.6.0 00:03:05.130 LIB libspdk_bdev_gpt.a 00:03:05.130 SYMLINK libspdk_bdev_split.so 00:03:05.130 SO libspdk_bdev_passthru.so.6.0 00:03:05.130 SYMLINK libspdk_blobfs_bdev.so 00:03:05.130 SO libspdk_bdev_gpt.so.6.0 00:03:05.130 SYMLINK libspdk_bdev_null.so 00:03:05.130 LIB libspdk_bdev_error.a 00:03:05.130 LIB libspdk_bdev_ftl.a 00:03:05.130 LIB libspdk_bdev_aio.a 00:03:05.130 SYMLINK libspdk_bdev_passthru.so 00:03:05.130 SYMLINK libspdk_bdev_gpt.so 00:03:05.130 LIB libspdk_bdev_delay.a 00:03:05.130 SO libspdk_bdev_error.so.6.0 00:03:05.130 SO libspdk_bdev_ftl.so.6.0 00:03:05.130 SO libspdk_bdev_aio.so.6.0 00:03:05.130 LIB libspdk_bdev_iscsi.a 00:03:05.130 SO libspdk_bdev_delay.so.6.0 00:03:05.130 LIB libspdk_bdev_zone_block.a 00:03:05.130 SO libspdk_bdev_iscsi.so.6.0 00:03:05.130 SYMLINK libspdk_bdev_error.so 00:03:05.130 SYMLINK libspdk_bdev_aio.so 00:03:05.130 SYMLINK libspdk_bdev_ftl.so 00:03:05.130 LIB libspdk_bdev_lvol.a 00:03:05.130 SO libspdk_bdev_zone_block.so.6.0 00:03:05.130 LIB libspdk_bdev_malloc.a 00:03:05.130 SYMLINK libspdk_bdev_delay.so 00:03:05.130 SO libspdk_bdev_lvol.so.6.0 00:03:05.130 SO libspdk_bdev_malloc.so.6.0 00:03:05.130 SYMLINK libspdk_bdev_iscsi.so 00:03:05.130 SYMLINK libspdk_bdev_zone_block.so 00:03:05.130 SYMLINK libspdk_bdev_lvol.so 00:03:05.437 SYMLINK libspdk_bdev_malloc.so 00:03:05.437 LIB libspdk_bdev_virtio.a 00:03:05.437 SO libspdk_bdev_virtio.so.6.0 00:03:05.437 SYMLINK libspdk_bdev_virtio.so 00:03:05.717 LIB libspdk_bdev_raid.a 00:03:05.717 SO libspdk_bdev_raid.so.6.0 00:03:05.975 SYMLINK libspdk_bdev_raid.so 00:03:06.914 LIB libspdk_bdev_nvme.a 00:03:06.914 SO libspdk_bdev_nvme.so.7.0 00:03:06.914 SYMLINK libspdk_bdev_nvme.so 00:03:07.172 CC module/event/subsystems/scheduler/scheduler.o 00:03:07.173 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:07.173 CC module/event/subsystems/iobuf/iobuf.o 00:03:07.173 CC module/event/subsystems/vmd/vmd.o 00:03:07.173 CC module/event/subsystems/keyring/keyring.o 00:03:07.173 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:07.173 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:07.173 CC module/event/subsystems/sock/sock.o 00:03:07.431 LIB libspdk_event_keyring.a 00:03:07.431 LIB libspdk_event_vhost_blk.a 00:03:07.431 LIB libspdk_event_sock.a 00:03:07.431 LIB libspdk_event_scheduler.a 00:03:07.431 LIB libspdk_event_vmd.a 00:03:07.431 SO libspdk_event_keyring.so.1.0 00:03:07.431 LIB libspdk_event_iobuf.a 00:03:07.431 SO libspdk_event_vhost_blk.so.3.0 00:03:07.431 SO libspdk_event_sock.so.5.0 00:03:07.431 SO libspdk_event_scheduler.so.4.0 00:03:07.431 SO libspdk_event_vmd.so.6.0 00:03:07.431 SO libspdk_event_iobuf.so.3.0 00:03:07.431 SYMLINK libspdk_event_keyring.so 00:03:07.431 SYMLINK libspdk_event_sock.so 00:03:07.431 SYMLINK libspdk_event_vhost_blk.so 00:03:07.431 SYMLINK libspdk_event_scheduler.so 
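The module/bdev objects compiled above (malloc, null, passthru, split, raid, zone_block, delay, gpt, lvol, nvme, virtio, iscsi, aio, ftl, error) are block-device layers that an application stacks at runtime rather than at link time. A hedged example of composing two of them over RPC, assuming a running target; the bdev names are illustrative:

    # Hypothetical: create a RAM-backed bdev, then layer a passthru vbdev on top of it
    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
    scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0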
00:03:07.431 SYMLINK libspdk_event_vmd.so 00:03:07.431 SYMLINK libspdk_event_iobuf.so 00:03:07.689 CC module/event/subsystems/accel/accel.o 00:03:07.948 LIB libspdk_event_accel.a 00:03:07.948 SO libspdk_event_accel.so.6.0 00:03:07.948 SYMLINK libspdk_event_accel.so 00:03:08.206 CC module/event/subsystems/bdev/bdev.o 00:03:08.206 LIB libspdk_event_bdev.a 00:03:08.206 SO libspdk_event_bdev.so.6.0 00:03:08.463 SYMLINK libspdk_event_bdev.so 00:03:08.463 CC module/event/subsystems/scsi/scsi.o 00:03:08.463 CC module/event/subsystems/ublk/ublk.o 00:03:08.463 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:08.463 CC module/event/subsystems/nbd/nbd.o 00:03:08.463 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:08.721 LIB libspdk_event_nbd.a 00:03:08.721 LIB libspdk_event_ublk.a 00:03:08.721 LIB libspdk_event_scsi.a 00:03:08.721 SO libspdk_event_nbd.so.6.0 00:03:08.721 SO libspdk_event_ublk.so.3.0 00:03:08.721 SO libspdk_event_scsi.so.6.0 00:03:08.721 SYMLINK libspdk_event_nbd.so 00:03:08.721 SYMLINK libspdk_event_ublk.so 00:03:08.721 SYMLINK libspdk_event_scsi.so 00:03:08.721 LIB libspdk_event_nvmf.a 00:03:08.721 SO libspdk_event_nvmf.so.6.0 00:03:08.979 SYMLINK libspdk_event_nvmf.so 00:03:08.979 CC module/event/subsystems/iscsi/iscsi.o 00:03:08.979 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:08.979 LIB libspdk_event_vhost_scsi.a 00:03:08.979 LIB libspdk_event_iscsi.a 00:03:08.979 SO libspdk_event_vhost_scsi.so.3.0 00:03:08.979 SO libspdk_event_iscsi.so.6.0 00:03:09.239 SYMLINK libspdk_event_vhost_scsi.so 00:03:09.239 SYMLINK libspdk_event_iscsi.so 00:03:09.239 SO libspdk.so.6.0 00:03:09.239 SYMLINK libspdk.so 00:03:09.504 CXX app/trace/trace.o 00:03:09.504 CC app/spdk_nvme_discover/discovery_aer.o 00:03:09.504 CC app/trace_record/trace_record.o 00:03:09.504 CC app/spdk_nvme_identify/identify.o 00:03:09.504 CC app/spdk_lspci/spdk_lspci.o 00:03:09.504 CC app/spdk_top/spdk_top.o 00:03:09.504 CC app/spdk_nvme_perf/perf.o 00:03:09.504 TEST_HEADER include/spdk/accel.h 00:03:09.504 TEST_HEADER include/spdk/accel_module.h 00:03:09.504 CC test/rpc_client/rpc_client_test.o 00:03:09.504 TEST_HEADER include/spdk/assert.h 00:03:09.504 TEST_HEADER include/spdk/barrier.h 00:03:09.504 TEST_HEADER include/spdk/base64.h 00:03:09.504 TEST_HEADER include/spdk/bdev.h 00:03:09.504 TEST_HEADER include/spdk/bdev_module.h 00:03:09.504 TEST_HEADER include/spdk/bdev_zone.h 00:03:09.504 TEST_HEADER include/spdk/bit_array.h 00:03:09.504 TEST_HEADER include/spdk/bit_pool.h 00:03:09.504 TEST_HEADER include/spdk/blob_bdev.h 00:03:09.504 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:09.504 TEST_HEADER include/spdk/blobfs.h 00:03:09.504 TEST_HEADER include/spdk/blob.h 00:03:09.504 TEST_HEADER include/spdk/conf.h 00:03:09.504 TEST_HEADER include/spdk/config.h 00:03:09.504 TEST_HEADER include/spdk/cpuset.h 00:03:09.504 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:09.504 TEST_HEADER include/spdk/crc16.h 00:03:09.504 TEST_HEADER include/spdk/crc32.h 00:03:09.504 CC app/spdk_dd/spdk_dd.o 00:03:09.504 TEST_HEADER include/spdk/crc64.h 00:03:09.504 CC app/nvmf_tgt/nvmf_main.o 00:03:09.504 TEST_HEADER include/spdk/dif.h 00:03:09.504 TEST_HEADER include/spdk/dma.h 00:03:09.504 CC app/iscsi_tgt/iscsi_tgt.o 00:03:09.504 TEST_HEADER include/spdk/endian.h 00:03:09.504 TEST_HEADER include/spdk/env_dpdk.h 00:03:09.504 TEST_HEADER include/spdk/env.h 00:03:09.504 TEST_HEADER include/spdk/event.h 00:03:09.504 CC app/vhost/vhost.o 00:03:09.504 TEST_HEADER include/spdk/fd_group.h 00:03:09.504 TEST_HEADER include/spdk/fd.h 00:03:09.504 
TEST_HEADER include/spdk/file.h 00:03:09.504 TEST_HEADER include/spdk/ftl.h 00:03:09.504 TEST_HEADER include/spdk/gpt_spec.h 00:03:09.504 TEST_HEADER include/spdk/hexlify.h 00:03:09.504 TEST_HEADER include/spdk/histogram_data.h 00:03:09.504 TEST_HEADER include/spdk/idxd.h 00:03:09.504 TEST_HEADER include/spdk/idxd_spec.h 00:03:09.504 TEST_HEADER include/spdk/init.h 00:03:09.504 TEST_HEADER include/spdk/ioat.h 00:03:09.504 TEST_HEADER include/spdk/ioat_spec.h 00:03:09.504 CC examples/ioat/verify/verify.o 00:03:09.504 CC app/spdk_tgt/spdk_tgt.o 00:03:09.504 CC examples/accel/perf/accel_perf.o 00:03:09.504 CC examples/ioat/perf/perf.o 00:03:09.504 TEST_HEADER include/spdk/iscsi_spec.h 00:03:09.504 TEST_HEADER include/spdk/json.h 00:03:09.504 CC examples/idxd/perf/perf.o 00:03:09.504 TEST_HEADER include/spdk/jsonrpc.h 00:03:09.504 TEST_HEADER include/spdk/keyring.h 00:03:09.504 CC examples/util/zipf/zipf.o 00:03:09.504 CC test/app/histogram_perf/histogram_perf.o 00:03:09.504 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:09.504 CC examples/sock/hello_world/hello_sock.o 00:03:09.504 TEST_HEADER include/spdk/keyring_module.h 00:03:09.504 CC app/fio/nvme/fio_plugin.o 00:03:09.504 CC examples/nvme/hotplug/hotplug.o 00:03:09.504 CC test/event/event_perf/event_perf.o 00:03:09.504 TEST_HEADER include/spdk/likely.h 00:03:09.504 CC test/thread/poller_perf/poller_perf.o 00:03:09.504 TEST_HEADER include/spdk/log.h 00:03:09.768 CC examples/nvme/reconnect/reconnect.o 00:03:09.768 CC examples/nvme/hello_world/hello_world.o 00:03:09.768 CC examples/vmd/lsvmd/lsvmd.o 00:03:09.768 CC examples/vmd/led/led.o 00:03:09.768 TEST_HEADER include/spdk/lvol.h 00:03:09.768 TEST_HEADER include/spdk/memory.h 00:03:09.768 CC examples/nvme/arbitration/arbitration.o 00:03:09.768 CC test/nvme/aer/aer.o 00:03:09.768 TEST_HEADER include/spdk/mmio.h 00:03:09.768 TEST_HEADER include/spdk/nbd.h 00:03:09.768 TEST_HEADER include/spdk/notify.h 00:03:09.768 TEST_HEADER include/spdk/nvme.h 00:03:09.768 TEST_HEADER include/spdk/nvme_intel.h 00:03:09.768 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:09.768 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:09.768 TEST_HEADER include/spdk/nvme_spec.h 00:03:09.768 TEST_HEADER include/spdk/nvme_zns.h 00:03:09.768 CC examples/blob/hello_world/hello_blob.o 00:03:09.769 CC examples/blob/cli/blobcli.o 00:03:09.769 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:09.769 CC examples/thread/thread/thread_ex.o 00:03:09.769 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:09.769 CC examples/bdev/hello_world/hello_bdev.o 00:03:09.769 CC app/fio/bdev/fio_plugin.o 00:03:09.769 CC examples/nvmf/nvmf/nvmf.o 00:03:09.769 CC test/bdev/bdevio/bdevio.o 00:03:09.769 TEST_HEADER include/spdk/nvmf.h 00:03:09.769 TEST_HEADER include/spdk/nvmf_spec.h 00:03:09.769 TEST_HEADER include/spdk/nvmf_transport.h 00:03:09.769 CC test/accel/dif/dif.o 00:03:09.769 CC test/app/bdev_svc/bdev_svc.o 00:03:09.769 CC examples/bdev/bdevperf/bdevperf.o 00:03:09.769 TEST_HEADER include/spdk/opal.h 00:03:09.769 CC test/dma/test_dma/test_dma.o 00:03:09.769 TEST_HEADER include/spdk/opal_spec.h 00:03:09.769 TEST_HEADER include/spdk/pci_ids.h 00:03:09.769 CC test/blobfs/mkfs/mkfs.o 00:03:09.769 TEST_HEADER include/spdk/pipe.h 00:03:09.769 TEST_HEADER include/spdk/queue.h 00:03:09.769 TEST_HEADER include/spdk/reduce.h 00:03:09.769 TEST_HEADER include/spdk/rpc.h 00:03:09.769 TEST_HEADER include/spdk/scheduler.h 00:03:09.769 TEST_HEADER include/spdk/scsi.h 00:03:09.769 TEST_HEADER include/spdk/scsi_spec.h 00:03:09.769 TEST_HEADER include/spdk/sock.h 
00:03:09.769 TEST_HEADER include/spdk/stdinc.h 00:03:09.769 TEST_HEADER include/spdk/string.h 00:03:09.769 TEST_HEADER include/spdk/thread.h 00:03:09.769 LINK spdk_lspci 00:03:09.769 TEST_HEADER include/spdk/trace.h 00:03:09.769 TEST_HEADER include/spdk/trace_parser.h 00:03:09.769 TEST_HEADER include/spdk/tree.h 00:03:09.769 TEST_HEADER include/spdk/ublk.h 00:03:09.769 CC test/lvol/esnap/esnap.o 00:03:09.769 TEST_HEADER include/spdk/util.h 00:03:09.769 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:09.769 TEST_HEADER include/spdk/uuid.h 00:03:09.769 TEST_HEADER include/spdk/version.h 00:03:09.769 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:09.769 CC test/env/mem_callbacks/mem_callbacks.o 00:03:09.769 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:09.769 TEST_HEADER include/spdk/vhost.h 00:03:09.769 TEST_HEADER include/spdk/vmd.h 00:03:09.769 TEST_HEADER include/spdk/xor.h 00:03:09.769 TEST_HEADER include/spdk/zipf.h 00:03:09.769 CXX test/cpp_headers/accel.o 00:03:09.769 LINK rpc_client_test 00:03:09.769 LINK spdk_nvme_discover 00:03:10.033 LINK interrupt_tgt 00:03:10.033 LINK nvmf_tgt 00:03:10.033 LINK histogram_perf 00:03:10.033 LINK event_perf 00:03:10.033 LINK lsvmd 00:03:10.033 LINK poller_perf 00:03:10.033 LINK zipf 00:03:10.033 LINK led 00:03:10.033 LINK spdk_trace_record 00:03:10.033 LINK vhost 00:03:10.033 LINK iscsi_tgt 00:03:10.033 LINK spdk_tgt 00:03:10.033 LINK verify 00:03:10.033 LINK ioat_perf 00:03:10.033 LINK bdev_svc 00:03:10.033 LINK hello_world 00:03:10.033 LINK mkfs 00:03:10.033 LINK hotplug 00:03:10.033 LINK hello_blob 00:03:10.033 LINK hello_sock 00:03:10.297 LINK hello_bdev 00:03:10.297 CXX test/cpp_headers/accel_module.o 00:03:10.297 LINK mem_callbacks 00:03:10.297 LINK thread 00:03:10.297 LINK aer 00:03:10.297 LINK spdk_dd 00:03:10.297 LINK idxd_perf 00:03:10.297 CC test/app/jsoncat/jsoncat.o 00:03:10.297 CXX test/cpp_headers/assert.o 00:03:10.297 LINK nvmf 00:03:10.297 LINK arbitration 00:03:10.297 CC test/app/stub/stub.o 00:03:10.297 LINK spdk_trace 00:03:10.297 LINK reconnect 00:03:10.297 CXX test/cpp_headers/barrier.o 00:03:10.560 LINK test_dma 00:03:10.560 CC test/event/reactor/reactor.o 00:03:10.560 LINK bdevio 00:03:10.560 CC test/event/reactor_perf/reactor_perf.o 00:03:10.560 CC test/env/vtophys/vtophys.o 00:03:10.560 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:10.560 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:10.560 CC examples/nvme/abort/abort.o 00:03:10.560 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:10.560 CXX test/cpp_headers/base64.o 00:03:10.560 LINK dif 00:03:10.560 CXX test/cpp_headers/bdev.o 00:03:10.560 LINK nvme_manage 00:03:10.560 CXX test/cpp_headers/bdev_module.o 00:03:10.560 LINK nvme_fuzz 00:03:10.560 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:10.560 LINK jsoncat 00:03:10.560 CC test/event/app_repeat/app_repeat.o 00:03:10.560 CC test/nvme/reset/reset.o 00:03:10.560 LINK accel_perf 00:03:10.560 CC test/nvme/sgl/sgl.o 00:03:10.560 CXX test/cpp_headers/bdev_zone.o 00:03:10.560 LINK blobcli 00:03:10.560 CXX test/cpp_headers/bit_array.o 00:03:10.828 CXX test/cpp_headers/bit_pool.o 00:03:10.828 CC test/event/scheduler/scheduler.o 00:03:10.828 CC test/nvme/overhead/overhead.o 00:03:10.828 LINK spdk_bdev 00:03:10.828 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:10.828 LINK stub 00:03:10.828 LINK spdk_nvme 00:03:10.828 LINK reactor_perf 00:03:10.828 CXX test/cpp_headers/blob_bdev.o 00:03:10.828 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:10.828 CC test/nvme/e2edp/nvme_dp.o 00:03:10.828 LINK reactor 
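The CXX test/cpp_headers/*.o entries in the run above are SPDK's public-header check: each header under include/spdk is compiled as its own translation unit, so a header that is not self-contained fails here rather than in a consumer's build. Below is a minimal sketch of the idea in bash; the variable names and scratch directory are illustrative assumptions, not the actual test/cpp_headers harness:

    #!/usr/bin/env bash
    # Sketch: compile each public SPDK header as a standalone translation unit.
    # SPDK_ROOT and the scratch directory are assumptions for illustration.
    set -euo pipefail

    SPDK_ROOT=${SPDK_ROOT:-/var/jenkins/workspace/nvmf-phy-autotest/spdk}
    scratch=$(mktemp -d)
    trap 'rm -rf "$scratch"' EXIT

    for hdr in "$SPDK_ROOT"/include/spdk/*.h; do
        name=$(basename "$hdr" .h)
        # A translation unit that does nothing but include the header: if the
        # header is missing one of its own includes, the compile fails here.
        printf '#include "spdk/%s.h"\n' "$name" > "$scratch/$name.cpp"
        g++ -std=c++11 -c -I"$SPDK_ROOT/include" \
            -o "$scratch/$name.o" "$scratch/$name.cpp" \
            || { echo "header not self-contained: $name.h" >&2; exit 1; }
    done
    echo "all public headers compiled standalone"

Because these objects contain no executable functions of their own, their .gcno files are also what the geninfo "no functions found" warnings refer to later in the log, once coverage capture begins.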
00:03:10.828 CC test/nvme/err_injection/err_injection.o 00:03:10.828 LINK vtophys 00:03:10.828 CC test/env/memory/memory_ut.o 00:03:10.828 CXX test/cpp_headers/blobfs_bdev.o 00:03:10.828 CC test/env/pci/pci_ut.o 00:03:10.828 CC test/nvme/startup/startup.o 00:03:10.828 LINK cmb_copy 00:03:10.828 CC test/nvme/reserve/reserve.o 00:03:10.828 LINK pmr_persistence 00:03:10.828 CC test/nvme/connect_stress/connect_stress.o 00:03:11.090 LINK app_repeat 00:03:11.090 CXX test/cpp_headers/blobfs.o 00:03:11.090 CXX test/cpp_headers/blob.o 00:03:11.090 CC test/nvme/simple_copy/simple_copy.o 00:03:11.090 CC test/nvme/boot_partition/boot_partition.o 00:03:11.090 CXX test/cpp_headers/conf.o 00:03:11.090 CXX test/cpp_headers/config.o 00:03:11.090 LINK env_dpdk_post_init 00:03:11.090 CXX test/cpp_headers/cpuset.o 00:03:11.090 CXX test/cpp_headers/crc16.o 00:03:11.090 CXX test/cpp_headers/crc32.o 00:03:11.090 CXX test/cpp_headers/crc64.o 00:03:11.090 CXX test/cpp_headers/dif.o 00:03:11.090 CXX test/cpp_headers/dma.o 00:03:11.090 CXX test/cpp_headers/endian.o 00:03:11.090 CXX test/cpp_headers/env_dpdk.o 00:03:11.090 CC test/nvme/compliance/nvme_compliance.o 00:03:11.090 CXX test/cpp_headers/env.o 00:03:11.090 CXX test/cpp_headers/event.o 00:03:11.090 CXX test/cpp_headers/fd_group.o 00:03:11.090 CC test/nvme/fused_ordering/fused_ordering.o 00:03:11.090 LINK spdk_nvme_perf 00:03:11.090 CXX test/cpp_headers/fd.o 00:03:11.090 LINK scheduler 00:03:11.090 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:11.090 CXX test/cpp_headers/file.o 00:03:11.090 LINK spdk_nvme_identify 00:03:11.090 LINK sgl 00:03:11.090 LINK reset 00:03:11.090 CC test/nvme/cuse/cuse.o 00:03:11.090 CC test/nvme/fdp/fdp.o 00:03:11.090 LINK err_injection 00:03:11.090 CXX test/cpp_headers/ftl.o 00:03:11.352 LINK bdevperf 00:03:11.352 LINK startup 00:03:11.352 CXX test/cpp_headers/gpt_spec.o 00:03:11.352 LINK spdk_top 00:03:11.352 LINK abort 00:03:11.352 CXX test/cpp_headers/hexlify.o 00:03:11.352 CXX test/cpp_headers/histogram_data.o 00:03:11.352 CXX test/cpp_headers/idxd.o 00:03:11.352 LINK overhead 00:03:11.352 LINK boot_partition 00:03:11.352 CXX test/cpp_headers/idxd_spec.o 00:03:11.352 LINK nvme_dp 00:03:11.352 CXX test/cpp_headers/init.o 00:03:11.352 LINK connect_stress 00:03:11.352 CXX test/cpp_headers/ioat.o 00:03:11.352 LINK reserve 00:03:11.352 CXX test/cpp_headers/ioat_spec.o 00:03:11.352 CXX test/cpp_headers/iscsi_spec.o 00:03:11.352 CXX test/cpp_headers/json.o 00:03:11.352 CXX test/cpp_headers/jsonrpc.o 00:03:11.352 CXX test/cpp_headers/keyring.o 00:03:11.352 CXX test/cpp_headers/keyring_module.o 00:03:11.352 CXX test/cpp_headers/likely.o 00:03:11.352 LINK simple_copy 00:03:11.616 CXX test/cpp_headers/log.o 00:03:11.616 CXX test/cpp_headers/lvol.o 00:03:11.616 CXX test/cpp_headers/memory.o 00:03:11.616 CXX test/cpp_headers/mmio.o 00:03:11.616 CXX test/cpp_headers/nbd.o 00:03:11.616 CXX test/cpp_headers/notify.o 00:03:11.616 CXX test/cpp_headers/nvme.o 00:03:11.616 CXX test/cpp_headers/nvme_intel.o 00:03:11.616 CXX test/cpp_headers/nvme_ocssd.o 00:03:11.616 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:11.616 LINK fused_ordering 00:03:11.616 LINK doorbell_aers 00:03:11.616 CXX test/cpp_headers/nvme_spec.o 00:03:11.616 CXX test/cpp_headers/nvme_zns.o 00:03:11.616 CXX test/cpp_headers/nvmf_cmd.o 00:03:11.616 LINK vhost_fuzz 00:03:11.616 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:11.616 CXX test/cpp_headers/nvmf.o 00:03:11.616 CXX test/cpp_headers/nvmf_spec.o 00:03:11.616 CXX test/cpp_headers/nvmf_transport.o 00:03:11.616 CXX 
test/cpp_headers/opal.o 00:03:11.616 CXX test/cpp_headers/opal_spec.o 00:03:11.616 LINK pci_ut 00:03:11.616 CXX test/cpp_headers/pci_ids.o 00:03:11.616 CXX test/cpp_headers/pipe.o 00:03:11.616 CXX test/cpp_headers/queue.o 00:03:11.616 CXX test/cpp_headers/reduce.o 00:03:11.616 CXX test/cpp_headers/rpc.o 00:03:11.616 CXX test/cpp_headers/scheduler.o 00:03:11.616 CXX test/cpp_headers/scsi.o 00:03:11.616 CXX test/cpp_headers/scsi_spec.o 00:03:11.616 CXX test/cpp_headers/sock.o 00:03:11.616 CXX test/cpp_headers/stdinc.o 00:03:11.616 CXX test/cpp_headers/string.o 00:03:11.616 CXX test/cpp_headers/thread.o 00:03:11.616 CXX test/cpp_headers/trace.o 00:03:11.616 CXX test/cpp_headers/trace_parser.o 00:03:11.875 LINK nvme_compliance 00:03:11.875 CXX test/cpp_headers/tree.o 00:03:11.875 CXX test/cpp_headers/ublk.o 00:03:11.875 CXX test/cpp_headers/util.o 00:03:11.875 CXX test/cpp_headers/uuid.o 00:03:11.875 CXX test/cpp_headers/version.o 00:03:11.875 CXX test/cpp_headers/vfio_user_pci.o 00:03:11.875 CXX test/cpp_headers/vfio_user_spec.o 00:03:11.875 CXX test/cpp_headers/vhost.o 00:03:11.875 CXX test/cpp_headers/vmd.o 00:03:11.875 CXX test/cpp_headers/xor.o 00:03:11.875 CXX test/cpp_headers/zipf.o 00:03:11.875 LINK fdp 00:03:12.133 LINK memory_ut 00:03:13.067 LINK cuse 00:03:13.067 LINK iscsi_fuzz 00:03:16.349 LINK esnap 00:03:16.607 00:03:16.607 real 0m40.319s 00:03:16.607 user 7m25.695s 00:03:16.607 sys 1m42.690s 00:03:16.607 14:01:43 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:03:16.607 14:01:43 make -- common/autotest_common.sh@10 -- $ set +x 00:03:16.607 ************************************ 00:03:16.607 END TEST make 00:03:16.607 ************************************ 00:03:16.607 14:01:43 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:16.607 14:01:43 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:16.607 14:01:43 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:16.607 14:01:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.607 14:01:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:16.607 14:01:43 -- pm/common@44 -- $ pid=4052157 00:03:16.607 14:01:43 -- pm/common@50 -- $ kill -TERM 4052157 00:03:16.607 14:01:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.607 14:01:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:16.607 14:01:43 -- pm/common@44 -- $ pid=4052159 00:03:16.607 14:01:43 -- pm/common@50 -- $ kill -TERM 4052159 00:03:16.607 14:01:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.607 14:01:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:16.607 14:01:43 -- pm/common@44 -- $ pid=4052161 00:03:16.607 14:01:43 -- pm/common@50 -- $ kill -TERM 4052161 00:03:16.607 14:01:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.607 14:01:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:16.607 14:01:43 -- pm/common@44 -- $ pid=4052191 00:03:16.607 14:01:43 -- pm/common@50 -- $ sudo -E kill -TERM 4052191 00:03:16.607 14:01:43 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:03:16.866 14:01:43 -- nvmf/common.sh@7 -- # uname -s 00:03:16.866 14:01:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:16.866 14:01:43 -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420
00:03:16.866 14:01:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:03:16.866 14:01:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:03:16.866 14:01:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:03:16.866 14:01:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:03:16.866 14:01:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:03:16.866 14:01:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:03:16.866 14:01:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:03:16.866 14:01:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:03:16.866 14:01:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911
00:03:16.866 14:01:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911
00:03:16.866 14:01:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:03:16.866 14:01:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:03:16.866 14:01:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:03:16.866 14:01:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:03:16.866 14:01:43 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:03:16.866 14:01:43 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:03:16.866 14:01:43 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:16.866 14:01:43 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:16.866 14:01:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:16.866 14:01:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:16.866 14:01:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:16.866 14:01:43 -- paths/export.sh@5 -- # export PATH
00:03:16.866 14:01:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:16.866 14:01:43 -- nvmf/common.sh@47 -- # : 0
00:03:16.866 14:01:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:03:16.866 14:01:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:03:16.866 14:01:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:03:16.866 14:01:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:03:16.866 14:01:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:03:16.866 14:01:43 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:03:16.866 14:01:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:03:16.866 14:01:43 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:03:16.866 14:01:43 -- spdk/autotest.sh@27 -- # '['
0 -ne 0 ']' 00:03:16.866 14:01:43 -- spdk/autotest.sh@32 -- # uname -s 00:03:16.866 14:01:43 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:16.866 14:01:43 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:16.866 14:01:43 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:16.866 14:01:43 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:16.866 14:01:43 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:16.866 14:01:43 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:16.866 14:01:44 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:16.866 14:01:44 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:16.866 14:01:44 -- spdk/autotest.sh@48 -- # udevadm_pid=4127202 00:03:16.866 14:01:44 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:16.866 14:01:44 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:16.866 14:01:44 -- pm/common@17 -- # local monitor 00:03:16.866 14:01:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.866 14:01:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.866 14:01:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.866 14:01:44 -- pm/common@21 -- # date +%s 00:03:16.866 14:01:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.866 14:01:44 -- pm/common@21 -- # date +%s 00:03:16.866 14:01:44 -- pm/common@25 -- # sleep 1 00:03:16.866 14:01:44 -- pm/common@21 -- # date +%s 00:03:16.866 14:01:44 -- pm/common@21 -- # date +%s 00:03:16.866 14:01:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721822504 00:03:16.866 14:01:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721822504 00:03:16.866 14:01:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721822504 00:03:16.866 14:01:44 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721822504 00:03:16.866 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721822504_collect-vmstat.pm.log 00:03:16.866 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721822504_collect-cpu-load.pm.log 00:03:16.866 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721822504_collect-cpu-temp.pm.log 00:03:16.866 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721822504_collect-bmc-pm.bmc.pm.log 00:03:17.801 14:01:45 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:17.801 14:01:45 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:17.801 14:01:45 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:17.801 14:01:45 -- common/autotest_common.sh@10 -- # set +x 00:03:17.801 
14:01:45 -- spdk/autotest.sh@59 -- # create_test_list
00:03:17.801 14:01:45 -- common/autotest_common.sh@744 -- # xtrace_disable
00:03:17.801 14:01:45 -- common/autotest_common.sh@10 -- # set +x
00:03:17.801 14:01:45 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh
00:03:17.801 14:01:45 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:03:17.801 14:01:45 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:03:17.801 14:01:45 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
00:03:17.801 14:01:45 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:03:17.801 14:01:45 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:03:17.801 14:01:45 -- common/autotest_common.sh@1451 -- # uname
00:03:17.801 14:01:45 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']'
00:03:17.802 14:01:45 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:03:17.802 14:01:45 -- common/autotest_common.sh@1471 -- # uname
00:03:17.802 14:01:45 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]]
00:03:17.802 14:01:45 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk
00:03:17.802 14:01:45 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc
00:03:17.802 14:01:45 -- spdk/autotest.sh@72 -- # hash lcov
00:03:17.802 14:01:45 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:03:17.802 14:01:45 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS=
00:03:17.802 --rc lcov_branch_coverage=1
00:03:17.802 --rc lcov_function_coverage=1
00:03:17.802 --rc genhtml_branch_coverage=1
00:03:17.802 --rc genhtml_function_coverage=1
00:03:17.802 --rc genhtml_legend=1
00:03:17.802 --rc geninfo_all_blocks=1
00:03:17.802 '
00:03:17.802 14:01:45 -- spdk/autotest.sh@80 -- # LCOV_OPTS='
00:03:17.802 --rc lcov_branch_coverage=1
00:03:17.802 --rc lcov_function_coverage=1
00:03:17.802 --rc genhtml_branch_coverage=1
00:03:17.802 --rc genhtml_function_coverage=1
00:03:17.802 --rc genhtml_legend=1
00:03:17.802 --rc geninfo_all_blocks=1
00:03:17.802 '
00:03:17.802 14:01:45 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov
00:03:17.802 --rc lcov_branch_coverage=1
00:03:17.802 --rc lcov_function_coverage=1
00:03:17.802 --rc genhtml_branch_coverage=1
00:03:17.802 --rc genhtml_function_coverage=1
00:03:17.802 --rc genhtml_legend=1
00:03:17.802 --rc geninfo_all_blocks=1
00:03:17.802 --no-external'
00:03:17.802 14:01:45 -- spdk/autotest.sh@81 -- # LCOV='lcov
00:03:17.802 --rc lcov_branch_coverage=1
00:03:17.802 --rc lcov_function_coverage=1
00:03:17.802 --rc genhtml_branch_coverage=1
00:03:17.802 --rc genhtml_function_coverage=1
00:03:17.802 --rc genhtml_legend=1
00:03:17.802 --rc geninfo_all_blocks=1
00:03:17.802 --no-external'
00:03:17.802 14:01:45 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v
00:03:17.802 lcov: LCOV version 1.14
00:03:17.802 14:01:45 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info
00:03:35.897 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no
functions found 00:03:35.897 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:48.362 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:48.362 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:48.362 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:48.362 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:48.362 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:48.362 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:48.362 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:48.362 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:48.362 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:48.362 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:48.362 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:48.362 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:48.362 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:48.362 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:48.362 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:48.362 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:48.362 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:48.362 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:48.362 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:48.362 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:48.362 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:48.362 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:48.362 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:48.362 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:48.362 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:48.362 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:48.362 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:48.362 geninfo: WARNING: GCOV 
did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:48.362 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:48.362 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:48.362 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:48.362 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:48.362 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:48.362 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:48.362 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:48.362 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:48.362 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:48.362 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:48.362 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:48.362 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:48.362 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:48.362 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:48.362 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:48.362 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:48.362 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:48.362 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:48.362 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:48.362 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:48.362 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:48.362 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:48.362 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:48.362 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:48.362 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:48.362 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:48.362 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:48.362 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:48.362 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:48.362 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:48.362 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:48.362 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:48.362 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:48.362 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:48.362 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:48.362 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:48.362 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:48.362 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:48.363 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:48.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:48.363 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:48.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:48.363 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:48.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:48.363 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:48.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:48.363 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:48.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:48.363 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:48.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:48.363 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:48.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:48.363 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:48.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:48.363 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:48.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:48.363 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:48.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:48.363 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:48.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:48.363 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:48.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:48.363 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:48.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:48.363 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:48.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:48.363 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:48.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:48.363 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:48.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:48.363 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:48.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:48.363 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:48.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:48.363 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:48.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:48.363 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:48.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:48.363 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:48.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:48.363 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:48.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:48.363 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:48.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:48.363 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:48.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:48.363 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:48.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:48.363 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:48.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:48.363 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:48.363 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:48.621 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:48.621 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:48.621 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:48.621 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:48.621 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:48.621 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:48.621 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:48.621 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:48.621 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:48.621 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:48.621 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:48.621 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:48.621 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:48.621 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:48.621 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:48.621 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:48.621 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:48.621 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:48.621 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:48.622 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:48.622 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:48.622 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:48.622 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:48.622 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:48.622 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:48.622 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:48.622 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:48.622 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:48.622 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:48.622 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:48.622 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:48.622 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:48.622 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:48.622 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:48.622 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:48.622 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:48.622 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:48.622 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:48.622 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:48.622 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:48.622 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:48.622 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:48.622 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:48.622 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:48.622 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:48.622 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:48.622 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:48.622 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:48.622 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:48.622 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:48.622 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:48.622 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:48.622 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:48.622 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:48.622 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:48.622 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:53.895 14:02:20 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:53.895 14:02:20 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:53.895 14:02:20 -- common/autotest_common.sh@10 -- # set +x 00:03:53.895 14:02:20 -- spdk/autotest.sh@91 -- # rm -f 00:03:53.895 14:02:20 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:54.828 0000:84:00.0 (8086 0a54): Already using the nvme driver 00:03:54.828 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:54.828 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:54.828 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:54.828 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:54.828 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:54.828 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:54.828 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:54.828 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:54.828 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:54.828 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:54.828 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:54.828 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:54.828 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:54.828 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:54.828 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:54.828 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:55.086 14:02:22 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:55.086 14:02:22 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:03:55.086 14:02:22 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:03:55.086 14:02:22 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:03:55.086 14:02:22 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:55.086 14:02:22 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:03:55.086 14:02:22 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:03:55.086 14:02:22 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:55.086 14:02:22 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:55.086 14:02:22 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:55.086 14:02:22 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:55.086 14:02:22 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:55.086 14:02:22 -- spdk/autotest.sh@113 
-- # block_in_use /dev/nvme0n1
00:03:55.086 14:02:22 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:03:55.086 14:02:22 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:55.087 No valid GPT data, bailing
00:03:55.087 14:02:22 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:55.087 14:02:22 -- scripts/common.sh@391 -- # pt=
00:03:55.087 14:02:22 -- scripts/common.sh@392 -- # return 1
00:03:55.087 14:02:22 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:55.087 1+0 records in
00:03:55.087 1+0 records out
00:03:55.087 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00190522 s, 550 MB/s
00:03:55.087 14:02:22 -- spdk/autotest.sh@118 -- # sync
00:03:55.087 14:02:22 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:55.087 14:02:22 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:55.087 14:02:22 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:56.987 14:02:24 -- spdk/autotest.sh@124 -- # uname -s
00:03:56.987 14:02:24 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:03:56.987 14:02:24 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh
00:03:56.987 14:02:24 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:56.987 14:02:24 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:56.987 14:02:24 -- common/autotest_common.sh@10 -- # set +x
00:03:56.987 ************************************
00:03:56.987 START TEST setup.sh
00:03:56.987 ************************************
00:03:56.987 14:02:24 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh
00:03:56.987 * Looking for test storage...
00:03:56.987 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup
00:03:56.987 14:02:24 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:03:56.987 14:02:24 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:03:56.987 14:02:24 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh
00:03:56.987 14:02:24 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:56.987 14:02:24 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:56.987 14:02:24 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:56.987 ************************************
00:03:56.987 START TEST acl
00:03:56.987 ************************************
00:03:56.987 14:02:24 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh
00:03:56.987 * Looking for test storage...
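The block_in_use check traced above is what authorizes the zero-fill: the namespace is only overwritten after spdk-gpt.py and blkid both fail to find a partition table on it. A rough bash equivalent of that guard-then-wipe pattern follows; wipe_if_blank and its argument are illustrative names rather than the SPDK helpers, and only the blkid half of the check is sketched:

    #!/usr/bin/env bash
    # Sketch: refuse to wipe a block device that still carries a partition table.
    set -euo pipefail

    wipe_if_blank() {
        local dev=$1

        # blkid prints the partition-table type (gpt, dos, ...) when one exists;
        # empty output means nothing recognizable was found on the device.
        local pt
        pt=$(blkid -s PTTYPE -o value "$dev" || true)
        if [[ -n $pt ]]; then
            echo "$dev has a $pt partition table, leaving it alone" >&2
            return 1
        fi

        # Device looks blank: zero the first MiB, where GPT and MBR headers
        # live, then flush the write before anything else probes the device.
        dd if=/dev/zero of="$dev" bs=1M count=1
        sync
    }

    wipe_if_blank /dev/nvme0n1

This mirrors the order visible in the trace: probe first, wipe only on a clean miss, then sync so later test stages see the zeroed header.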
00:03:56.987 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup
00:03:56.987 14:02:24 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs
00:03:56.987 14:02:24 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=()
00:03:56.987 14:02:24 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs
00:03:56.987 14:02:24 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf
00:03:56.987 14:02:24 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme*
00:03:56.987 14:02:24 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1
00:03:56.987 14:02:24 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1
00:03:56.987 14:02:24 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:56.987 14:02:24 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]]
00:03:56.987 14:02:24 setup.sh.acl -- setup/acl.sh@12 -- # devs=()
00:03:56.987 14:02:24 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs
00:03:56.987 14:02:24 setup.sh.acl -- setup/acl.sh@13 -- # drivers=()
00:03:56.987 14:02:24 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers
00:03:56.987 14:02:24 setup.sh.acl -- setup/acl.sh@51 -- # setup reset
00:03:56.987 14:02:24 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:56.987 14:02:24 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset
00:03:58.887 14:02:25 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs
00:03:58.887 14:02:25 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver
00:03:58.887 14:02:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:58.887 14:02:25 setup.sh.acl -- setup/acl.sh@15 -- # setup output status
00:03:58.887 14:02:25 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]]
00:03:58.887 14:02:25 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:04:00.262 Hugepages
00:04:00.262 node hugesize free / total
00:04:00.262 14:02:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:04:00.262 14:02:27 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:04:00.262 14:02:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:00.262 14:02:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:04:00.262 14:02:27 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:04:00.262 14:02:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:00.262 14:02:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:04:00.262 14:02:27 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:04:00.262 14:02:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:00.262
00:04:00.262 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:00.262 14:02:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:04:00.262 14:02:27 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:04:00.262 14:02:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:00.262 14:02:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]]
00:04:00.262 14:02:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:00.262 14:02:27 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:04:00.262 14:02:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:00.262 14:02:27 setup.sh.acl -- setup/acl.sh@19 -- # [[
0000:00:04.1 == *:*:*.* ]] 00:04:00.262 14:02:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:00.262 14:02:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:00.262 14:02:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ [... the identical ioatdma check/continue trace repeats for 0000:00:04.2 through 0000:00:04.7 and 0000:80:04.0 through 0000:80:04.3 ...] 00:04:00.263 14:02:27
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:00.263 14:02:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:00.263 14:02:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:00.263 14:02:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:00.263 14:02:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:00.263 14:02:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:00.263 14:02:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:00.263 14:02:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:00.263 14:02:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:00.263 14:02:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:00.263 14:02:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:00.263 14:02:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:00.263 14:02:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:00.263 14:02:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:00.263 14:02:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:00.263 14:02:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:00.263 14:02:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:84:00.0 == *:*:*.* ]] 00:04:00.263 14:02:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:00.263 14:02:27 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\4\:\0\0\.\0* ]] 00:04:00.263 14:02:27 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:00.263 14:02:27 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:00.263 14:02:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:00.263 14:02:27 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:00.263 14:02:27 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:00.263 14:02:27 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:00.263 14:02:27 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:00.263 14:02:27 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:00.263 ************************************ 00:04:00.263 START TEST denied 00:04:00.263 ************************************ 00:04:00.263 14:02:27 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:04:00.263 14:02:27 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:84:00.0' 00:04:00.263 14:02:27 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:00.263 14:02:27 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.263 14:02:27 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:84:00.0' 00:04:00.263 14:02:27 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:01.636 0000:84:00.0 (8086 0a54): Skipping denied controller at 0000:84:00.0 00:04:01.636 14:02:28 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:84:00.0 00:04:01.636 14:02:28 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:01.636 14:02:28 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:01.636 14:02:28 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:84:00.0 ]] 00:04:01.636 14:02:28 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:84:00.0/driver 00:04:01.636 14:02:28 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:01.636 14:02:28 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:01.636 14:02:28 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:01.636 14:02:28 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:01.637 14:02:28 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:04.197 00:04:04.197 real 0m4.124s 00:04:04.197 user 0m1.215s 00:04:04.197 sys 0m1.977s 00:04:04.197 14:02:31 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:04.197 14:02:31 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:04.197 ************************************ 00:04:04.197 END TEST denied 00:04:04.197 ************************************ 00:04:04.197 14:02:31 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:04.197 14:02:31 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:04.198 14:02:31 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:04.198 14:02:31 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:04.456 ************************************ 00:04:04.456 START TEST allowed 00:04:04.456 ************************************ 00:04:04.456 14:02:31 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:04:04.456 14:02:31 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:84:00.0 00:04:04.456 14:02:31 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:04.456 14:02:31 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:84:00.0 .*: nvme -> .*' 00:04:04.456 14:02:31 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.456 14:02:31 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:06.988 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:04:06.988 14:02:33 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:06.988 14:02:33 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:06.988 14:02:33 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:06.988 14:02:33 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:06.988 14:02:33 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:08.889 00:04:08.889 real 0m4.212s 00:04:08.889 user 0m1.143s 00:04:08.889 sys 0m1.887s 00:04:08.889 14:02:35 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:08.889 14:02:35 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:08.889 ************************************ 00:04:08.889 END TEST allowed 00:04:08.889 ************************************ 00:04:08.889 00:04:08.889 real 0m11.481s 00:04:08.889 user 0m3.582s 00:04:08.889 sys 0m5.858s 00:04:08.889 14:02:35 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:08.889 14:02:35 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:08.889 ************************************ 00:04:08.889 END TEST acl 00:04:08.889 ************************************ 00:04:08.889 14:02:35 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:04:08.889 14:02:35 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:08.889 14:02:35 setup.sh -- common/autotest_common.sh@1103 
-- # xtrace_disable 00:04:08.889 14:02:35 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:08.889 ************************************ 00:04:08.889 START TEST hugepages 00:04:08.889 ************************************ 00:04:08.889 14:02:35 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:04:08.889 * Looking for test storage... 00:04:08.889 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:08.889 14:02:35 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:08.889 14:02:35 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:08.889 14:02:35 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:08.889 14:02:35 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:08.889 14:02:35 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:08.889 14:02:35 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:08.889 14:02:35 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:08.889 14:02:35 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:08.889 14:02:35 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:08.889 14:02:35 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:08.889 14:02:35 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.889 14:02:35 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.889 14:02:35 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.889 14:02:35 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.889 14:02:35 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.889 14:02:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.889 14:02:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.889 14:02:35 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541604 kB' 'MemFree: 40184032 kB' 'MemAvailable: 45228056 kB' 'Buffers: 2704 kB' 'Cached: 13533624 kB' 'SwapCached: 0 kB' 'Active: 9289732 kB' 'Inactive: 4710900 kB' 'Active(anon): 8901064 kB' 'Inactive(anon): 0 kB' 'Active(file): 388668 kB' 'Inactive(file): 4710900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 467688 kB' 'Mapped: 224692 kB' 'Shmem: 8436760 kB' 'KReclaimable: 589108 kB' 'Slab: 956756 kB' 'SReclaimable: 589108 kB' 'SUnreclaim: 367648 kB' 'KernelStack: 12960 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562252 kB' 'Committed_AS: 10071132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198216 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1846756 kB' 'DirectMap2M: 21141504 kB' 'DirectMap1G: 46137344 kB' 00:04:08.889 14:02:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.889 14:02:35 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:04:08.889 14:02:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.889 14:02:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ [... the identical field-match/continue trace repeats for every remaining /proc/meminfo field until Hugepagesize is reached ...] 00:04:08.891 14:02:35 setup.sh.hugepages --
setup/common.sh@31 -- # IFS=': ' 00:04:08.891 14:02:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.891 14:02:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.891 14:02:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.891 14:02:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.891 14:02:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.891 14:02:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.891 14:02:35 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:08.891 14:02:35 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:08.891 14:02:35 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:08.891 14:02:35 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:08.891 14:02:35 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:08.891 14:02:35 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:08.891 14:02:35 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:08.891 14:02:35 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:08.891 14:02:35 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:08.891 14:02:35 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:08.891 14:02:35 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:08.891 14:02:35 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:08.891 14:02:35 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:08.891 14:02:35 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:08.891 14:02:35 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:08.891 14:02:35 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:08.891 14:02:35 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:08.891 14:02:35 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:08.891 14:02:35 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:08.891 14:02:35 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:08.891 14:02:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:08.891 14:02:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:08.891 14:02:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:08.891 14:02:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:08.891 14:02:35 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:08.891 14:02:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:08.891 14:02:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:08.891 14:02:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:08.891 14:02:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:08.891 14:02:35 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export 
CLEAR_HUGE=yes 00:04:08.891 14:02:35 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:08.891 14:02:35 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:08.891 14:02:35 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:08.891 14:02:35 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:08.891 14:02:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:08.891 ************************************ 00:04:08.891 START TEST default_setup 00:04:08.891 ************************************ 00:04:08.891 14:02:35 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:04:08.891 14:02:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:08.891 14:02:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:08.891 14:02:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:08.891 14:02:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:08.891 14:02:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:08.891 14:02:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:08.891 14:02:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:08.891 14:02:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:08.891 14:02:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:08.891 14:02:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:08.891 14:02:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:08.891 14:02:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:08.891 14:02:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:08.891 14:02:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:08.891 14:02:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:08.891 14:02:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:08.891 14:02:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:08.891 14:02:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:08.891 14:02:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:08.891 14:02:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:08.891 14:02:35 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.891 14:02:35 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:10.267 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:10.267 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:10.267 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:10.267 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:10.267 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:10.267 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:10.267 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:10.267 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 
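Just above, get_test_nr_hugepages 2097152 0 resolves to a single division: the requested pool size over the default hugepage size read from /proc/meminfo, pinned to the one node id passed in. A sketch of that arithmetic with illustrative names (the sysfs path is the standard kernel per-node interface, an assumption about what the setup script ultimately writes; both sizes appear to be in kB):

  size_kb=2097152                              # requested pool: 2 GiB in kB
  hugepage_kb=2048                             # Hugepagesize from /proc/meminfo
  nr_hugepages=$(( size_kb / hugepage_kb ))    # -> 1024, matching nr_hugepages=1024 above
  # default_setup passes node id 0, so the whole pool lands on node0 (needs root)
  echo "$nr_hugepages" > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages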
00:04:10.267 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:10.267 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:10.267 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:10.267 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:10.267 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:10.267 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:10.267 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:10.267 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:11.203 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541604 kB' 'MemFree: 42278900 kB' 'MemAvailable: 47322884 kB' 'Buffers: 2704 kB' 'Cached: 13533708 kB' 'SwapCached: 0 kB' 'Active: 9307448 kB' 'Inactive: 4710900 kB' 'Active(anon): 8918780 kB' 'Inactive(anon): 0 kB' 'Active(file): 388668 kB' 'Inactive(file): 4710900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485116 kB' 'Mapped: 224644 kB' 'Shmem: 8436844 kB' 'KReclaimable: 589068 kB' 'Slab: 955984 kB' 'SReclaimable: 589068 kB' 'SUnreclaim: 366916 kB' 'KernelStack: 12912 kB' 'PageTables: 8276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610828 kB' 'Committed_AS: 10090220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198328 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1846756 kB' 'DirectMap2M: 21141504 kB' 'DirectMap1G: 46137344 kB' 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:11.204 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ [... the identical field-match/continue trace continues in the same pattern for the intervening /proc/meminfo fields ...] 00:04:11.205 14:02:38
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f 
00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:11.205 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541604 kB' 'MemFree: 42282468 kB' 'MemAvailable: 47326452 kB' 'Buffers: 2704 kB' 'Cached: 13533716 kB' 'SwapCached: 0 kB' 'Active: 9307772 kB' 'Inactive: 4710900 kB' 'Active(anon): 8919104 kB' 'Inactive(anon): 0 kB' 'Active(file): 388668 kB' 'Inactive(file): 4710900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485484 kB' 'Mapped: 224624 kB' 'Shmem: 8436852 kB' 'KReclaimable: 589068 kB' 'Slab: 956144 kB' 'SReclaimable: 589068 kB' 'SUnreclaim: 367076 kB' 'KernelStack: 12928 kB' 'PageTables: 8340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610828 kB' 'Committed_AS: 10090608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198296 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1846756 kB' 'DirectMap2M: 21141504 kB' 'DirectMap1G: 46137344 kB'
[trace condensed: setup/common.sh@31-32 -- the scan walks the snapshot above key by key, MemTotal through HugePages_Free, each non-match taking the "continue" branch, until HugePages_Surp matches]
00:04:11.469 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.469 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:11.469 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:11.469 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
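A side note on reading these traces: the right-hand side of each test shows up backslash-escaped (\H\u\g\e\P\a\g\e\s\_\S\u\r\p) because the unquoted right side of == inside [[ ]] is a glob pattern, and bash's xtrace escapes the expanded $get so the logged form would match literally if re-executed. The "setup/common.sh@32 -- #" prefix likewise looks like a customized PS4 embedding source file and line number rather than the default "+" (an inference from the log format, not something the log states). A quick demo of the escaping:

    # Reproduce the escaped-pattern rendering seen in this log.
    set -x
    get=HugePages_Surp
    var=HugePages_Surp
    [[ $var == $get ]] && echo matched
    set +x
    # With the default PS4 this traces roughly as:
    #   + [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
    #   + echo matched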
00:04:11.469 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[trace condensed: setup/common.sh@17-31 -- same preamble as the HugePages_Surp call above, now with get=HugePages_Rsvd: locals, mem_f=/proc/meminfo, node probe, mapfile -t mem, the "Node +([0-9]) " strip, IFS=': ' read]
00:04:11.470 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541604 kB' 'MemFree: 42283244 kB' 'MemAvailable: 47327228 kB' 'Buffers: 2704 kB' 'Cached: 13533732 kB' 'SwapCached: 0 kB' 'Active: 9307656 kB' 'Inactive: 4710900 kB' 'Active(anon): 8918988 kB' 'Inactive(anon): 0 kB' 'Active(file): 388668 kB' 'Inactive(file): 4710900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485364 kB' 'Mapped: 224624 kB' 'Shmem: 8436868 kB' 'KReclaimable: 589068 kB' 'Slab: 956052 kB' 'SReclaimable: 589068 kB' 'SUnreclaim: 366984 kB' 'KernelStack: 12912 kB' 'PageTables: 8260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610828 kB' 'Committed_AS: 10090628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198280 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1846756 kB' 'DirectMap2M: 21141504 kB' 'DirectMap1G: 46137344 kB'
[trace condensed: setup/common.sh@31-32 -- scan over MemTotal through HugePages_Free, "continue" on each non-match, until HugePages_Rsvd matches]
00:04:11.471 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.471 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:11.471 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:11.471 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:04:11.471 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:04:11.471 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:11.471 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:11.471 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:11.471 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:11.471 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
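The hugepages.sh@97-@109 lines are the bookkeeping at the heart of default_setup: each counter is read back through get_meminfo and reconciled against the requested page count. Restated as a self-contained sketch (here `expected` stands in for the value the script derives from its configuration, 1024 in this run; the real script obtains nr_hugepages earlier than shown and the exact sequencing differs):

    # Reconcile kernel hugepage counters against the requested allocation.
    expected=1024                                  # pages requested by the test
    anon=$(get_meminfo AnonHugePages)              # THP in use, kB (0 above)
    surp=$(get_meminfo HugePages_Surp)             # surplus pages (0 above)
    resv=$(get_meminfo HugePages_Rsvd)             # reserved pages (0 above)
    nr_hugepages=$(get_meminfo HugePages_Total)    # pool size (1024 above)
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"
    (( expected == nr_hugepages + surp + resv ))   # pool accounting adds up
    (( expected == nr_hugepages ))                 # nothing leaked into surp/resv

Each (( ... )) returns nonzero when the comparison is false, so if the caller runs with set -e a mismatch fails the test immediately; both checks pass here.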
00:04:11.471 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[trace condensed: setup/common.sh@17-31 -- same get_meminfo preamble as above, now with get=HugePages_Total]
00:04:11.472 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541604 kB' 'MemFree: 42282488 kB' 'MemAvailable: 47326472 kB' 'Buffers: 2704 kB' 'Cached: 13533732 kB' 'SwapCached: 0 kB' 'Active: 9307428 kB' 'Inactive: 4710900 kB' 'Active(anon): 8918760 kB' 'Inactive(anon): 0 kB' 'Active(file): 388668 kB' 'Inactive(file): 4710900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485140 kB' 'Mapped: 224624 kB' 'Shmem: 8436868 kB' 'KReclaimable: 589068 kB' 'Slab: 956052 kB' 'SReclaimable: 589068 kB' 'SUnreclaim: 366984 kB' 'KernelStack: 12928 kB' 'PageTables: 8300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610828 kB' 'Committed_AS: 10090648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198280 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1846756 kB' 'DirectMap2M: 21141504 kB' 'DirectMap1G: 46137344 kB'
[trace condensed: setup/common.sh@31-32 -- the key-by-key scan against HugePages_Total begins (MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active and the keys that follow, "continue" on each non-match)]
-- # read -r var val _ 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.473 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.474 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.474 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.474 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.474 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:11.474 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:11.474 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:11.474 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.474 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:11.474 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:11.474 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 
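What the run of "continue" entries above amounts to: get_meminfo scans the selected meminfo file line by line with IFS=': ' until the requested key matches, then echoes its value (1024 for HugePages_Total here). A minimal standalone sketch of the same pattern; the function name and argument handling are illustrative, not the actual setup/common.sh helper:

    get_meminfo_value() {
        # scan /proc/meminfo line by line, splitting "Key: value unit"
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    get_meminfo_value HugePages_Total   # prints 1024 on this runner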
00:04:11.474 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:11.474 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:04:11.474 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:11.474 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:11.474 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:11.474 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:11.474 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:11.474 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:11.474 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:11.474 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:11.474 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:11.474 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:11.474 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:04:11.474 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:11.474 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:11.474 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.474 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:11.474 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:11.474 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:11.474 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.474 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:11.474 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:11.474 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876820 kB' 'MemFree: 23210948 kB' 'MemUsed: 9665872 kB' 'SwapCached: 0 kB' 'Active: 5827244 kB' 'Inactive: 1037772 kB' 'Active(anon): 5542960 kB' 'Inactive(anon): 0 kB' 'Active(file): 284284 kB' 'Inactive(file): 1037772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6593316 kB' 'Mapped: 136212 kB' 'AnonPages: 275040 kB' 'Shmem: 5271260 kB' 'KernelStack: 7912 kB' 'PageTables: 5248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 142972 kB' 'Slab: 309084 kB' 'SReclaimable: 142972 kB' 'SUnreclaim: 166112 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
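The per-node variant just traced differs only in where it reads: because node 0 was requested and /sys/devices/system/node/node0/meminfo exists, mem_f switches to that file, and the leading "Node 0 " prefix is stripped from each line with the extglob pattern ${mem[@]#Node +([0-9]) }. A sketch under those assumptions (again an illustrative helper, not the real setup/common.sh code):

    shopt -s extglob
    node_meminfo_value() {
        local key=$1 node=$2 mem_f=/proc/meminfo line var val _
        # prefer the per-node meminfo when it exists
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }            # drop the "Node N " prefix
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        return 1
    }

    node_meminfo_value HugePages_Surp 0   # prints 0 on this box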
00:04:11.474 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [... the HugePages_Surp scan walks node0's remaining keys (MemTotal through HugePages_Free) with "continue" on each ...]
00:04:11.475 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.475 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:11.475 14:02:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:11.475 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:11.475 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:11.475 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:11.475 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:11.475 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:11.475 node0=1024 expecting 1024
00:04:11.475 14:02:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:11.475 
00:04:11.475 real	0m2.704s
00:04:11.475 user	0m0.782s
00:04:11.475 sys	0m1.048s
00:04:11.475 14:02:38 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:11.475 14:02:38 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:04:11.475 ************************************
00:04:11.475 END TEST default_setup
00:04:11.475 ************************************
00:04:11.475 14:02:38 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:11.475 14:02:38 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:11.475 14:02:38 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:11.475 14:02:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:11.475 ************************************
00:04:11.475 START TEST per_node_1G_alloc
00:04:11.475 ************************************
00:04:11.475 14:02:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc
00:04:11.475 14:02:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:04:11.475 14:02:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:04:11.475 14:02:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:11.475 14:02:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:04:11.475 14:02:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:04:11.475 14:02:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:04:11.475 14:02:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:11.475 14:02:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:11.475 14:02:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:11.475 14:02:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:04:11.475 14:02:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:04:11.475 14:02:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:11.475 14:02:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:11.475 14:02:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:11.475 14:02:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:11.475 14:02:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:11.475 14:02:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:04:11.475 14:02:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:11.475 14:02:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:11.475 14:02:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:11.475 14:02:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:11.475 14:02:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:11.475 14:02:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:11.475 14:02:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:04:11.475 14:02:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:04:11.475 14:02:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:11.475 14:02:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:04:12.851 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:12.851 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:12.851 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:12.851 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:12.851 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:12.851 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:12.851 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:12.851 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:12.851 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:12.851 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:12.851 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:12.851 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:12.851 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:12.851 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:12.851 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:12.851 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:12.851 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:12.851 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:04:12.851 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
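For reproducing this step by hand: setup.sh takes the page count and node placement from the NRHUGE and HUGENODE environment variables, which is exactly what the @146 trace lines set before the call (path as in this workspace):

    NRHUGE=512 HUGENODE=0,1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh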
00:04:12.851 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:12.851 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:12.851 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:12.851 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:12.851 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:12.851 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:12.851 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:12.851 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:12.851 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:12.851 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:12.851 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:12.851 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:12.851 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.851 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.851 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.851 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.851 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.851 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:12.851 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:12.852 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541604 kB' 'MemFree: 42286356 kB' 'MemAvailable: 47330348 kB' 'Buffers: 2704 kB' 'Cached: 13533828 kB' 'SwapCached: 0 kB' 'Active: 9308432 kB' 'Inactive: 4710900 kB' 'Active(anon): 8919764 kB' 'Inactive(anon): 0 kB' 'Active(file): 388668 kB' 'Inactive(file): 4710900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 486008 kB' 'Mapped: 224668 kB' 'Shmem: 8436964 kB' 'KReclaimable: 589076 kB' 'Slab: 956196 kB' 'SReclaimable: 589076 kB' 'SUnreclaim: 367120 kB' 'KernelStack: 12928 kB' 'PageTables: 8300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610828 kB' 'Committed_AS: 10090832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198392 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1846756 kB' 'DirectMap2M: 21141504 kB' 'DirectMap1G: 46137344 kB'
00:04:12.852 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [... the AnonHugePages scan walks every key (MemTotal through HardwareCorrupted) with "continue" on each ...]
00:04:13.118 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:13.118 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:13.118 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:13.118 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
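The anon check that just completed keys off the bracket test at @96: "always [madvise] never" is presumably the contents of /sys/kernel/mm/transparent_hugepage/enabled, and since it is not "[never]", the test reads AnonHugePages anyway (0 kB here, so transparent hugepages are not eating into the pool). A standalone sketch of the same guard, assuming that sysfs path:

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        # THP not fully disabled: report anonymous hugepage usage
        awk '/^AnonHugePages:/ {print "AnonHugePages:", $2, $3}' /proc/meminfo
    fi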
00:04:13.118 14:02:40 setup.sh.hugepages.per_node_1G_alloc [trace condensed: get_meminfo locals set up (common.sh@17-25), mem_f=/proc/meminfo read via mapfile (common.sh@28), per-node "Node N" prefixes stripped (common.sh@29)] 00:04:13.119 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541604 kB' 'MemFree: 42294552 kB' 'MemAvailable: 47338544 kB' 'Buffers: 2704 kB' 'Cached: 13533828 kB' 'SwapCached: 0 kB' 'Active: 9308700 kB' 'Inactive: 4710900 kB' 'Active(anon): 8920032 kB' 'Inactive(anon): 0 kB' 'Active(file): 388668 kB' 'Inactive(file): 4710900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 486292 kB' 'Mapped: 224712 kB' 'Shmem: 8436964 kB' 'KReclaimable: 589076 kB' 'Slab: 956204 kB' 'SReclaimable: 589076 kB' 'SUnreclaim: 367128 kB' 'KernelStack: 12960 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610828 kB' 'Committed_AS: 10090852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198344 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1846756 kB' 'DirectMap2M: 21141504 kB' 'DirectMap1G: 46137344 kB' 00:04:13.121 14:02:40 setup.sh.hugepages.per_node_1G_alloc [trace condensed: each snapshot field compared against HugePages_Surp until it matched] 00:04:13.121 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.121 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:13.121 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:13.121 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
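For readability, here is a minimal sketch of what the get_meminfo helper traced above appears to do, reconstructed from the common.sh@N markers in this log. The control flow, the extglob setting, the per-node fallback, and the not-found return value are assumptions, not a verbatim copy of setup/common.sh:

    #!/usr/bin/env bash
    shopt -s extglob  # assumed: required for the +([0-9]) pattern seen at common.sh@29

    get_meminfo() {
        local get=$1 node=${2:-}    # field name, optional NUMA node (common.sh@17-18)
        local var val _ mem_f mem   # (common.sh@19-20)
        mem_f=/proc/meminfo         # (common.sh@22)
        # When a node is given and its meminfo exists, read the per-node file (common.sh@23)
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"   # (common.sh@28)
        # Per-node meminfo prefixes every line with "Node <N> "; strip it (common.sh@29)
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"   # split "Field: value kB" (common.sh@31)
            [[ $var == "$get" ]] || continue          # skip non-matching fields (common.sh@32)
            echo "$val"                               # print the bare value (common.sh@33)
            return 0
        done
        return 1  # assumed: field not present in this meminfo
    }

Used as in the trace, e.g. surp=$(get_meminfo HugePages_Surp), which is why every lookup re-reads and re-scans the whole snapshot and the xtrace prints one comparison per field.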
00:04:13.121 14:02:40 setup.sh.hugepages.per_node_1G_alloc [trace condensed: get_meminfo re-read /proc/meminfo for HugePages_Rsvd] 00:04:13.121 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541604 kB' 'MemFree: 42295388 kB' 'MemAvailable: 47339380 kB' 'Buffers: 2704 kB' 'Cached: 13533848 kB' 'SwapCached: 0 kB' 'Active: 9308264 kB' 'Inactive: 4710900 kB' 'Active(anon): 8919596 kB' 'Inactive(anon): 0 kB' 'Active(file): 388668 kB' 'Inactive(file): 4710900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485868 kB' 'Mapped: 224636 kB' 'Shmem: 8436984 kB' 'KReclaimable: 589076 kB' 'Slab: 956160 kB' 'SReclaimable: 589076 kB' 'SUnreclaim: 367084 kB' 'KernelStack: 12960 kB' 'PageTables: 8396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610828 kB' 'Committed_AS: 10090872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198344 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1846756 kB' 'DirectMap2M: 21141504 kB' 'DirectMap1G: 46137344 kB' 00:04:13.123 14:02:40 setup.sh.hugepages.per_node_1G_alloc [trace condensed: each snapshot field compared against HugePages_Rsvd until it matched] 00:04:13.123 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.123 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:13.123 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:13.123 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:13.123 nr_hugepages=1024 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:13.123 resv_hugepages=0 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:13.123 surplus_hugepages=0 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:13.123 anon_hugepages=0 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:13.123 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:13.123 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:13.123 14:02:40 setup.sh.hugepages.per_node_1G_alloc [trace condensed: get_meminfo locals set up and /proc/meminfo re-read for HugePages_Total] 00:04:13.124 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541604 kB' 'MemFree: 42295388 kB' 'MemAvailable: 47339380 kB' 'Buffers: 2704 kB' 'Cached: 13533872 kB' 'SwapCached: 0 kB' 'Active: 9309916 kB' 'Inactive: 4710900 kB' 'Active(anon): 8921248 kB' 'Inactive(anon): 0 kB' 'Active(file): 388668 kB' 'Inactive(file): 4710900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487488 kB' 'Mapped: 225072 kB' 'Shmem: 8437008 kB' 'KReclaimable: 589076 kB' 'Slab: 956160 kB' 'SReclaimable: 589076 kB' 'SUnreclaim: 367084 kB' 'KernelStack: 12944 kB' 'PageTables: 8348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610828 kB' 'Committed_AS: 10093440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198344 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1846756 kB' 'DirectMap2M: 21141504 kB' 'DirectMap1G: 46137344 kB'
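The two arithmetic guards at hugepages.sh@107 and hugepages.sh@109 read as a sanity check that the 1024 hugepages requested by the test are all accounted for before the per-node allocation proceeds. A hedged sketch of that check follows; the NRHUGE name, the source of nr_hugepages, and the failure handling are assumptions rather than verbatim hugepages.sh code:

    # Assumed driver code around the traced calls; NRHUGE=1024 matches this run's log.
    NRHUGE=1024
    anon=$(get_meminfo AnonHugePages)    # 0 in this run (hugepages.sh@97)
    surp=$(get_meminfo HugePages_Surp)   # 0 (hugepages.sh@99)
    resv=$(get_meminfo HugePages_Rsvd)   # 0 (hugepages.sh@100)
    echo "nr_hugepages=$nr_hugepages"    # 1024, obtained earlier in the trace
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"
    # Both guards held here: the static pool equals the requested size with no
    # surplus or reserved pages outstanding (hugepages.sh@107, @109).
    (( NRHUGE == nr_hugepages + surp + resv )) || exit 1
    (( NRHUGE == nr_hugepages )) || exit 1

With surp and resv both 0, the two conditions collapse to the same equality, which is consistent with both guards passing silently in the trace before the follow-up HugePages_Total lookup at hugepages.sh@110.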
00:04:13.124 14:02:40 setup.sh.hugepages.per_node_1G_alloc [trace condensed: snapshot fields compared one by one against HugePages_Total]
14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.125 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.125 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.125 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.125 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.125 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.125 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.125 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.125 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.125 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.125 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.125 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.125 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.125 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.125 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.125 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.125 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.125 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:13.126 14:02:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876820 kB' 'MemFree: 24265044 kB' 'MemUsed: 8611776 kB' 'SwapCached: 0 kB' 'Active: 5832848 kB' 'Inactive: 1037772 kB' 'Active(anon): 5548564 kB' 'Inactive(anon): 0 kB' 'Active(file): 284284 kB' 'Inactive(file): 1037772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6593388 kB' 'Mapped: 136216 kB' 'AnonPages: 280360 kB' 'Shmem: 5271332 kB' 'KernelStack: 7848 kB' 'PageTables: 5016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 142972 kB' 'Slab: 308976 kB' 'SReclaimable: 142972 kB' 'SUnreclaim: 166004 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.126 14:02:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
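The scan traced above is setup/common.sh's get_meminfo helper: it mapfile-reads /proc/meminfo (or a node's meminfo under /sys/devices/system/node), strips the leading 'Node N ' with an extglob pattern, then splits each line on ': ' and walks field by field (the compare/continue pairs) until the requested key matches and its value is echoed. A condensed sketch reconstructed from the xtrace — the name and structure follow the trace, but treat the exact body as illustrative rather than the verbatim script:

shopt -s extglob                       # the +([0-9]) pattern below needs extglob

get_meminfo() {                        # usage: get_meminfo <field> [node]
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"   # e.g. var=HugePages_Surp val=0
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_meminfo HugePages_Surp 0           # prints 0 for node0, per the snapshot above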
00:04:13.126 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... the same compare/continue xtrace repeats for the remaining node0 fields until HugePages_Surp matches ...]
00:04:13.127 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:13.127 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:13.127 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:13.127 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:13.127 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:13.127 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:13.127 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:13.127 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:13.127 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:04:13.127 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:13.127 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:13.127 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.127 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:13.127 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:13.127 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.127 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.127 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:13.127 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:13.128 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664784 kB' 'MemFree: 18030092 kB' 'MemUsed: 9634692 kB' 'SwapCached: 0 kB' 'Active: 3480776 kB' 'Inactive: 3673128 kB' 'Active(anon): 3376392 kB' 'Inactive(anon): 0 kB' 'Active(file): 104384 kB' 'Inactive(file): 3673128 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6943208 kB' 'Mapped: 89324 kB' 'AnonPages: 210728 kB' 'Shmem: 3165696 kB' 'KernelStack: 5096 kB' 'PageTables: 3332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 446104 kB' 'Slab: 647184 kB' 'SReclaimable: 446104 kB' 'SUnreclaim: 201080 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:13.128 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:13.128 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 --
continue
[... the same compare/continue xtrace repeats for the remaining node1 fields until HugePages_Surp matches ...]
00:04:13.129 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:13.129 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:13.129 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:13.129 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:13.129 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:13.129 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:13.129 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:13.129 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:13.129 node0=512 expecting 512
00:04:13.129 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:13.129 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:13.129 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:13.129 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:13.129 node1=512 expecting 512
00:04:13.129 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:13.129
00:04:13.129 real 0m1.655s
00:04:13.129 user 0m0.673s
00:04:13.129 sys 0m0.938s
00:04:13.129 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:13.129 14:02:40 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:13.129 ************************************
00:04:13.129 END TEST per_node_1G_alloc
00:04:13.129 ************************************
00:04:13.129 14:02:40 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:13.129 14:02:40 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:13.129 14:02:40 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:13.129 14:02:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:13.130 ************************************
00:04:13.130 START TEST even_2G_alloc
00:04:13.130 ************************************ 00:04:13.130 14:02:40 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:04:13.130 14:02:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:13.130 14:02:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:13.130 14:02:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:13.130 14:02:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:13.130 14:02:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:13.130 14:02:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:13.130 14:02:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:13.130 14:02:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:13.130 14:02:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:13.130 14:02:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:13.130 14:02:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:13.130 14:02:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:13.130 14:02:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:13.130 14:02:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:13.130 14:02:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:13.130 14:02:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:13.130 14:02:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:13.130 14:02:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:13.130 14:02:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:13.130 14:02:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:13.130 14:02:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:13.130 14:02:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:13.130 14:02:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:13.130 14:02:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:13.130 14:02:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:13.130 14:02:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:13.130 14:02:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.130 14:02:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:14.505 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:14.505 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:14.505 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:14.505 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:14.505 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:14.505 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:14.505 0000:00:04.2 (8086 0e22): 
Already using the vfio-pci driver 00:04:14.505 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:14.505 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:14.505 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:14.505 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:14.505 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:14.505 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:14.505 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:14.505 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:14.505 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:14.505 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:14.769 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:14.769 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:14.769 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:14.769 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:14.769 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:14.769 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:14.769 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:14.769 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:14.769 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:14.769 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:14.769 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541604 kB' 'MemFree: 42311356 kB' 'MemAvailable: 47355260 kB' 'Buffers: 2704 kB' 'Cached: 13533964 kB' 'SwapCached: 0 kB' 'Active: 9305836 kB' 'Inactive: 4710900 kB' 'Active(anon): 8917168 kB' 'Inactive(anon): 0 kB' 'Active(file): 388668 kB' 'Inactive(file): 4710900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 482932 kB' 'Mapped: 223736 kB' 'Shmem: 8437100 kB' 'KReclaimable: 588988 kB' 'Slab: 955984 kB' 'SReclaimable: 588988 kB' 'SUnreclaim: 366996 kB' 'KernelStack: 12912 kB' 'PageTables: 8052 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610828 kB' 'Committed_AS: 10081932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198328 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1846756 kB' 'DirectMap2M: 21141504 kB' 'DirectMap1G: 46137344 kB' 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
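The per-key churn traced above is setup/common.sh's get_meminfo helper scanning /proc/meminfo one line at a time: each iteration re-sets IFS=': ', pulls a key/value pair with `read -r var val _`, and `continue`s until the key equals the requested name. A minimal bash sketch of that loop, reconstructed from the trace alone (the mapfile redirection, the final echo/return, and the function signature are assumptions; the real body of setup/common.sh may differ):

    shopt -s extglob                        # the +([0-9]) pattern below needs extglob
    get_meminfo() {                         # usage: get_meminfo AnonHugePages [node]
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo mem
        # Per-node queries would read the node-local file; node is empty in this
        # trace, so the [[ -e ... ]] test at common.sh@23 fails and /proc/meminfo wins.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # strip "Node N " prefixes of per-node files
        while IFS=': ' read -r var val _; do   # a trailing "kB", if any, lands in _
            [[ $var == "$get" ]] || continue   # skip keys until the requested one
            echo "$val"                        # e.g. common.sh@33: echo 0
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Matching the log: the AnonHugePages query continues past every other key, hits `echo 0` / `return 0` at common.sh@33, and hugepages.sh@97 records the result as anon=0.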
00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.770 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.771 14:02:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@18 -- # local node= 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.771 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541604 kB' 'MemFree: 42311876 kB' 'MemAvailable: 47355780 kB' 'Buffers: 2704 kB' 'Cached: 13533964 kB' 'SwapCached: 0 kB' 'Active: 9305884 kB' 'Inactive: 4710900 kB' 'Active(anon): 8917216 kB' 'Inactive(anon): 0 kB' 'Active(file): 388668 kB' 'Inactive(file): 4710900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483256 kB' 'Mapped: 223676 kB' 'Shmem: 8437100 kB' 'KReclaimable: 588988 kB' 'Slab: 955984 kB' 'SReclaimable: 588988 kB' 'SUnreclaim: 366996 kB' 'KernelStack: 12960 kB' 'PageTables: 8148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610828 kB' 'Committed_AS: 10081952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198280 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1846756 kB' 'DirectMap2M: 21141504 kB' 'DirectMap1G: 46137344 kB' 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.772 14:02:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.772 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 14:02:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 14:02:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.773 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541604 kB' 'MemFree: 42312400 kB' 'MemAvailable: 47356304 kB' 'Buffers: 2704 kB' 'Cached: 13533984 kB' 'SwapCached: 0 kB' 'Active: 9305640 kB' 'Inactive: 4710900 kB' 'Active(anon): 8916972 kB' 'Inactive(anon): 0 kB' 'Active(file): 388668 kB' 'Inactive(file): 4710900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483040 kB' 'Mapped: 223676 kB' 'Shmem: 8437120 kB' 'KReclaimable: 588988 kB' 'Slab: 956068 kB' 'SReclaimable: 588988 kB' 'SUnreclaim: 367080 kB' 'KernelStack: 12928 kB' 'PageTables: 8072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610828 kB' 'Committed_AS: 10081972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198280 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1846756 kB' 'DirectMap2M: 21141504 kB' 'DirectMap1G: 46137344 kB' 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.774 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 
14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
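These repeated scans feed the even_2G_alloc bookkeeping that surfaces a little further down in this trace (setup/hugepages.sh@97-@110): anon, surp, and resv all come back 0, the script echoes nr_hugepages=1024 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0, and then asserts that the configured 2M pool is fully accounted for. A condensed sketch of that flow; nr_hugepages is assumed to have been set to 1024 earlier in the test, and the wrapper lines around the traced statements are guesses:

    anon=$(get_meminfo AnonHugePages)    # -> 0  (hugepages.sh@97)
    surp=$(get_meminfo HugePages_Surp)   # -> 0  (@99)
    resv=$(get_meminfo HugePages_Rsvd)   # -> 0  (@100)
    echo "nr_hugepages=$nr_hugepages"    # prints nr_hugepages=1024 (@102)
    echo "resv_hugepages=$resv"          # @103
    echo "surplus_hugepages=$surp"       # @104
    echo "anon_hugepages=$anon"          # @105
    (( 1024 == nr_hugepages + surp + resv ))   # @107: every page is accounted for
    (( 1024 == nr_hugepages ))                 # @109: none are surplus or reserved

With both checks true, the test re-reads HugePages_Total (hugepages.sh@110), which is where the final scan in this excerpt begins.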
00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.775 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.776 14:02:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.776 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.777 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.777 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.777 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.777 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.777 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.777 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
00:04:14.777 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.777 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: setup/common.sh@32 tests each remaining /proc/meminfo field, HardwareCorrupted through HugePages_Free, against \H\u\g\e\P\a\g\e\s\_\R\s\v\d and skips it with "continue"]
00:04:14.777 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:14.777 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:14.777 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:14.777 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:14.777 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:14.777 nr_hugepages=1024
00:04:14.777 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:14.777 resv_hugepages=0
00:04:14.777 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:14.777 surplus_hugepages=0
00:04:14.777 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:14.777 anon_hugepages=0
00:04:14.777 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:14.777 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
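A minimal bash sketch of the loop driving the xtrace above (illustrative name, not the SPDK sources): with IFS set to ': ', `read -r var val _` splits each /proc/meminfo line into a field name and a value, and every field other than the requested one trips the common.sh@32 "continue" seen throughout this log.

    #!/usr/bin/env bash
    # get_meminfo_sketch FIELD -- print FIELD's value from /proc/meminfo.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Non-matching fields are skipped, mirroring the "continue" lines above.
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1   # field not present
    }
    # Example: get_meminfo_sketch HugePages_Rsvd   -> "0" on this test node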
00:04:14.777 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:14.777 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:14.777 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:14.777 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:14.777 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:14.777 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:14.777 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:14.777 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:14.777 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:14.777 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:14.777 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.777 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:14.777 14:02:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541604 kB' 'MemFree: 42314240 kB' 'MemAvailable: 47358144 kB' 'Buffers: 2704 kB' 'Cached: 13534008 kB' 'SwapCached: 0 kB' 'Active: 9305720 kB' 'Inactive: 4710900 kB' 'Active(anon): 8917052 kB' 'Inactive(anon): 0 kB' 'Active(file): 388668 kB' 'Inactive(file): 4710900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483044 kB' 'Mapped: 223676 kB' 'Shmem: 8437144 kB' 'KReclaimable: 588988 kB' 'Slab: 956068 kB' 'SReclaimable: 588988 kB' 'SUnreclaim: 367080 kB' 'KernelStack: 12928 kB' 'PageTables: 8072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610828 kB' 'Committed_AS: 10081992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198296 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1846756 kB' 'DirectMap2M: 21141504 kB' 'DirectMap1G: 46137344 kB'
[xtrace elided: setup/common.sh@32 tests each field, MemTotal through Unaccepted, against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l and skips it with "continue"]
00:04:14.780 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:14.780 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:14.780 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:14.780 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
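The same get_meminfo helper accepts an optional node argument, used by the per-node HugePages_Surp queries that follow: common.sh@23-24 switch the input file to /sys/devices/system/node/nodeN/meminfo, and common.sh@29 strips the "Node N " prefix those lines carry via the extglob pattern seen in the xtrace. A hedged sketch of that selection and stripping (illustrative function name, not the SPDK implementation):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below
    # node_meminfo_sketch FIELD [NODE] -- FIELD's value, system-wide or per node.
    node_meminfo_sketch() {
        local get=$1 node=$2 mem_f=/proc/meminfo mem line var val _
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    # Example: node_meminfo_sketch HugePages_Surp 0   -> "0" on this machine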
'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:14.780 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.780 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.780 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.780 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.780 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.780 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.780 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.780 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.780 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.780 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.780 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.780 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.780 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.780 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.780 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.780 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.781 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.782 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664784 kB' 'MemFree: 18022816 kB' 'MemUsed: 9641968 kB' 'SwapCached: 0 kB' 'Active: 3479292 kB' 'Inactive: 3673128 kB' 'Active(anon): 3374908 kB' 'Inactive(anon): 0 kB' 'Active(file): 104384 kB' 'Inactive(file): 3673128 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6943216 kB' 'Mapped: 87996 kB' 'AnonPages: 209228 kB' 'Shmem: 3165704 kB' 'KernelStack: 5080 kB' 'PageTables: 3056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 446040 kB' 'Slab: 646924 kB' 'SReclaimable: 446040 kB' 'SUnreclaim: 200884 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 
'HugePages_Surp: 0' 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.783 
14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:14.783 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[xtrace trimmed: setup/common.sh@31-32 reads each remaining /proc/meminfo key (KernelStack through HugePages_Free) and continues past every key that is not HugePages_Surp]
00:04:14.784 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:14.784 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:14.784 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:14.784 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:14.784 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:14.784 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:14.784 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:14.784 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:14.784 node0=512 expecting 512
00:04:14.784 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:14.784 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:14.784 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:14.784 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:14.784 node1=512 expecting 512
00:04:14.784 14:02:42 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:14.784
00:04:14.784 real 0m1.663s
00:04:14.784 user 0m0.703s
00:04:14.784 sys 0m0.918s
00:04:14.784 14:02:42 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:14.784 14:02:42 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:14.784 ************************************
00:04:14.784 END TEST even_2G_alloc
00:04:14.784 ************************************
00:04:14.784 14:02:42 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:14.784 14:02:42 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:14.784 14:02:42 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:14.784 14:02:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:14.784 ************************************
00:04:14.784 START TEST odd_alloc
00:04:14.784 ************************************
00:04:14.784 14:02:42 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc
00:04:14.784 14:02:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:14.784 14:02:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
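[editor's note: the HugePages_Surp lookup that concludes above is setup/common.sh's get_meminfo helper walking /proc/meminfo one key at a time, which is what produces the long compare/continue runs trimmed in this log. A minimal bash sketch of that helper, reconstructed from the xtrace alone (the upstream script may differ in detail):]

    shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

    # get_meminfo KEY [NODE] -> print the value of KEY, per the @16-@33 trace.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo mem
        # @23: read the per-node meminfo when it exists; with node unset the
        # test fails and the global /proc/meminfo is used instead.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # @29: strip the "Node N " prefix
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"   # @31
            [[ $var == "$get" ]] || continue         # @32: the trimmed runs
            echo "$val"                              # @33
            return 0
        done
    }

    get_meminfo HugePages_Surp   # -> 0, matching the "echo 0" traced above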
00:04:14.784 14:02:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:14.784 14:02:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:14.784 14:02:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:14.784 14:02:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:14.784 14:02:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:14.784 14:02:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:14.784 14:02:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:14.784 14:02:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:14.784 14:02:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:14.784 14:02:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:14.784 14:02:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:14.784 14:02:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:14.784 14:02:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:14.784 14:02:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:14.784 14:02:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:04:14.784 14:02:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:14.784 14:02:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:14.784 14:02:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:04:14.784 14:02:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:14.784 14:02:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:14.784 14:02:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
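[editor's note: the trace above turns the requested size of 2098176 kB into nr_hugepages=1025 (2098176 / 2048 kB per page = 1024.5, rounded up) and get_test_nr_hugepages_per_node then splits them across the two NUMA nodes as 513 and 512. A sketch of that arithmetic; the rounding and loop shape are inferred from the traced values, not copied from setup/hugepages.sh:]

    size=2098176            # kB, requested by the odd_alloc test
    default_hugepages=2048  # kB, the Hugepagesize reported in the meminfo dumps
    _no_nodes=2

    # 2098176 / 2048 = 1024.5; the trace shows nr_hugepages=1025, i.e. round up.
    nr_hugepages=$(( (size + default_hugepages - 1) / default_hugepages ))

    # Split across nodes, highest node first, remainder landing on node 0:
    declare -a nodes_test
    pages=$nr_hugepages
    for (( n = _no_nodes - 1; n >= 0; n-- )); do
        nodes_test[n]=$(( pages / (n + 1) ))
        (( pages -= nodes_test[n] ))
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=513 node1=512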
00:04:14.784 14:02:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:14.785 14:02:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:14.785 14:02:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:04:14.785 14:02:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:14.785 14:02:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:04:16.163 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:16.163 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:16.163 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:16.163 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:16.163 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:16.163 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:16.163 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:16.163 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:16.163 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:16.163 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:16.163 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:16.163 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:16.163 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:16.163 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:16.163 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:16.163 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:16.163 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:16.426 14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541604 kB' 'MemFree: 42289940 kB' 'MemAvailable: 47333844 kB' 'Buffers: 2704 kB' 'Cached: 13534096 kB' 'SwapCached: 0 kB' 'Active: 9305588 kB' 'Inactive: 4710900 kB' 'Active(anon): 8916920 kB' 'Inactive(anon): 0 kB' 'Active(file): 388668 kB' 'Inactive(file): 4710900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 482804 kB' 'Mapped: 223684 kB' 'Shmem: 8437232 kB' 'KReclaimable: 588988 kB' 'Slab: 955928 kB' 'SReclaimable: 588988 kB' 'SUnreclaim: 366940 kB' 'KernelStack: 12912 kB' 'PageTables: 7996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609804 kB' 'Committed_AS: 10082192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198424 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1846756 kB' 'DirectMap2M: 21141504 kB' 'DirectMap1G: 46137344 kB'
00:04:16.427 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:16.427 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[xtrace trimmed: setup/common.sh@31-32 reads each /proc/meminfo key (MemFree through HardwareCorrupted) and continues past every key that is not AnonHugePages]
00:04:16.429 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:16.429 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:16.429 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:16.429 14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:16.429 14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:16.429 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:16.429 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:16.429 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:16.429 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:16.429 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:16.429 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:16.429 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:16.429 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:16.429 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:16.429 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:16.429 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:16.430 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541604 kB' 'MemFree: 42289688 kB' 'MemAvailable: 47333592 kB' 'Buffers: 2704 kB' 'Cached: 13534100 kB' 'SwapCached: 0 kB' 'Active: 9305884 kB' 'Inactive: 4710900 kB' 'Active(anon): 8917216 kB' 'Inactive(anon): 0 kB' 'Active(file): 388668 kB' 'Inactive(file): 4710900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483132 kB' 'Mapped: 223684 kB' 'Shmem: 8437236 kB' 'KReclaimable: 588988 kB' 'Slab: 955920 kB' 'SReclaimable: 588988 kB' 'SUnreclaim: 366932 kB' 'KernelStack: 12896 kB' 'PageTables: 7936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609804 kB' 'Committed_AS: 10082208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198392 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1846756 kB' 'DirectMap2M: 21141504 kB' 'DirectMap1G: 46137344 kB'
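[editor's note: the verify_nr_hugepages preamble traced above first checks that transparent hugepages are not pinned to [never] and then samples three counters through get_meminfo. Condensed into a sketch with the values this run produced; it relies on the get_meminfo sketch earlier in this log and is a reconstruction, not the upstream function:]

    # Condensed from the setup/hugepages.sh@89-@100 trace.
    verify_nr_hugepages_preamble() {
        local surp resv anon
        # @96: AnonHugePages is only read while THP is not set to "[never]"
        if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *'[never]'* ]]; then
            anon=$(get_meminfo AnonHugePages)   # -> 0 in this run
        fi
        surp=$(get_meminfo HugePages_Surp)      # -> 0 (the scan just below)
        resv=$(get_meminfo HugePages_Rsvd)      # -> 0 (scanned further below)
        echo "anon=$anon surp=$surp resv=$resv"
    }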
00:04:16.430 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:16.430 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[xtrace trimmed: setup/common.sh@31-32 reads each /proc/meminfo key (MemFree through HugePages_Rsvd) and continues past every key that is not HugePages_Surp]
00:04:16.432 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:16.432 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:16.432 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:16.433 14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:16.433 14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:16.433 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:16.433 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:16.433 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:16.433 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:16.433 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:16.433 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:16.433 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:16.433 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:16.433 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:16.433 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:16.433 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:16.433 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541604 kB' 'MemFree: 42292896 kB' 'MemAvailable: 47336800 kB' 'Buffers: 2704 kB' 'Cached: 13534132 kB' 'SwapCached: 0 kB' 'Active: 9305540 kB' 'Inactive: 4710900 kB' 'Active(anon): 8916872 kB' 'Inactive(anon): 0 kB' 'Active(file): 388668 kB' 'Inactive(file): 4710900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 482724 kB' 'Mapped: 223684 kB' 'Shmem: 8437268 kB' 'KReclaimable: 588988 kB' 'Slab: 955964 kB' 'SReclaimable: 588988 kB' 'SUnreclaim: 366976 kB' 'KernelStack: 12928 kB' 'PageTables: 8028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609804 kB' 'Committed_AS: 10082232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198376 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1846756 kB' 'DirectMap2M: 21141504 kB' 'DirectMap1G: 46137344 kB'
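[editor's note: each trimmed run in this section is the pure-bash scan making a full pass over /proc/meminfo for a single key, which is why the raw xtrace is so long. Outside the harness the same lookup is a one-liner; shown only for comparison, this is not what setup/common.sh does:]

    # One-off equivalent of get_meminfo HugePages_Rsvd (comparison only):
    awk -F': +' '$1 == "HugePages_Rsvd" { print $2 + 0 }' /proc/meminfo   # -> 0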
00:04:16.433 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:16.433 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[xtrace trimmed: setup/common.sh@31-32 reads each /proc/meminfo key (MemFree through KernelStack) and continues past every key that is not HugePages_Rsvd]
00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.435 14:02:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.435 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:16.436 nr_hugepages=1025 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:16.436 resv_hugepages=0 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:16.436 surplus_hugepages=0 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:16.436 anon_hugepages=0 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # 
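Nearly all of the trace above is setup/common.sh's get_meminfo doing a linear scan of /proc/meminfo: each "Key: value" line is split on IFS=': ', and every key that is not the one requested logs a continue. The backslash-riddled \H\u\g\e\P\a\g\e\s\_\R\s\v\d is just xtrace escaping the right-hand side of the [[ == ]] test so it is matched literally rather than as a glob. A minimal bash sketch of the same scan, assuming a direct file read instead of the mapfile/array plumbing the real helper uses (meminfo_get is a hypothetical name, not the suite's function):

  #!/usr/bin/env bash
  # Sketch of the field scan recorded in the trace: split each meminfo line
  # into key and value, skip non-matching keys, print the match and stop.
  meminfo_get() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue  # each skipped key is one "continue" in the log
          echo "$val"
          return 0
      done < /proc/meminfo
      return 1
  }

  meminfo_get HugePages_Rsvd   # prints 0 on the machine in this run

Because the caller captures the result with command substitution, the echoed value never shows up as a stdout line in the log; only the hugepages.sh echo statements do.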
00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:16.436 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541604 kB' 'MemFree: 42292896 kB' 'MemAvailable: 47336800 kB' 'Buffers: 2704 kB' 'Cached: 13534132 kB' 'SwapCached: 0 kB' 'Active: 9305724 kB' 'Inactive: 4710900 kB' 'Active(anon): 8917056 kB' 'Inactive(anon): 0 kB' 'Active(file): 388668 kB' 'Inactive(file): 4710900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 482908 kB' 'Mapped: 223684 kB' 'Shmem: 8437268 kB' 'KReclaimable: 588988 kB' 'Slab: 955964 kB' 'SReclaimable: 588988 kB' 'SUnreclaim: 366976 kB' 'KernelStack: 12928 kB' 'PageTables: 8028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609804 kB' 'Committed_AS: 10082252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198392 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1846756 kB' 'DirectMap2M: 21141504 kB' 'DirectMap1G: 46137344 kB'
00:04:16.437 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [repetitive trace elided: the read/continue scan steps over every field from MemTotal through Unaccepted; none matches HugePages_Total]
00:04:16.439 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:16.439 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:04:16.439 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:16.439 14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
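The two arithmetic tests around this point, (( 1025 == nr_hugepages + surp + resv )) and (( 1025 == nr_hugepages )), are the actual assertion of this step: the HugePages_Total the kernel reports must equal the 1025 pages the test requested plus surplus plus reserved pages, and both extra terms are 0 in this run. The same check as a standalone snippet (the awk line is an illustrative stand-in for get_meminfo):

  # Values as reported in this run: 1025 pages requested, no surplus, none reserved.
  nr_hugepages=1025 surp=0 resv=0
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting is consistent"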
00:04:16.439 14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:16.439 14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:04:16.439 14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:16.439 14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:16.439 14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:16.439 14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:04:16.439 14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:16.439 14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:16.439 14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:16.439 14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:16.439 14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:16.439 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:16.439 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:04:16.439 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:16.439 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:16.439 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:16.439 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:16.439 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:16.439 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:16.439 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:16.439 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:16.439 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:16.439 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876820 kB' 'MemFree: 24275536 kB' 'MemUsed: 8601284 kB' 'SwapCached: 0 kB' 'Active: 5825868 kB' 'Inactive: 1037772 kB' 'Active(anon): 5541584 kB' 'Inactive(anon): 0 kB' 'Active(file): 284284 kB' 'Inactive(file): 1037772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6593576 kB' 'Mapped: 135688 kB' 'AnonPages: 273172 kB' 'Shmem: 5271520 kB' 'KernelStack: 7848 kB' 'PageTables: 4972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 142948 kB' 'Slab: 309180 kB' 'SReclaimable: 142948 kB' 'SUnreclaim: 166232 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:16.440 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [repetitive trace elided: the scan steps over every node0 field from MemTotal through HugePages_Free; none matches HugePages_Surp]
00:04:16.440 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:16.440 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
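The node-0 lookup just above shows the per-node branch of get_meminfo: once a node index is supplied, mem_f switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo, and because every line in that file carries a "Node 0 " prefix, the helper strips it with an extglob substitution before the usual scan. A sketch of that selection and strip, using the same expansions the trace records (the final grep is only for display):

  # Per-node meminfo read, mirroring setup/common.sh@22-29 in the trace.
  shopt -s extglob                      # +([0-9]) below needs extended globbing
  node=0
  mem_f=/proc/meminfo
  [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
      mem_f=/sys/devices/system/node/node$node/meminfo
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")      # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
  printf '%s\n' "${mem[@]}" | grep -E '^HugePages_(Total|Free|Surp)'

On this system node 0 reports HugePages_Total: 512, HugePages_Free: 512 and HugePages_Surp: 0, exactly the dump shown above.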
00:04:16.440 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:16.440 14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:16.440 14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:16.440 14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:16.440 14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:16.440 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:16.440 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:04:16.440 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:16.440 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:16.440 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:16.440 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:16.440 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:16.440 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:16.440 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:16.440 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:16.440 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:16.441 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664784 kB' 'MemFree: 18017940 kB' 'MemUsed: 9646844 kB' 'SwapCached: 0 kB' 'Active: 3480292 kB' 'Inactive: 3673128 kB' 'Active(anon): 3375908 kB' 'Inactive(anon): 0 kB' 'Active(file): 104384 kB' 'Inactive(file): 3673128 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6943304 kB' 'Mapped: 87996 kB' 'AnonPages: 210220 kB' 'Shmem: 3165792 kB' 'KernelStack: 5096 kB' 'PageTables: 3112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 446040 kB' 'Slab: 646784 kB' 'SReclaimable: 446040 kB' 'SUnreclaim: 200744 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
00:04:16.441 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [repetitive trace elided: the scan steps over the node1 fields (MemTotal, MemFree, MemUsed, ... ShmemHugePages) looking for HugePages_Surp]
setup/common.sh@32 -- # continue 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:16.442 node0=512 expecting 513 00:04:16.442 14:02:43 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:16.442 node1=513 expecting 512 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:16.442 00:04:16.442 real 0m1.584s 00:04:16.442 user 0m0.663s 00:04:16.442 sys 0m0.875s 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:16.442 14:02:43 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:16.442 ************************************ 00:04:16.442 END TEST odd_alloc 00:04:16.442 ************************************ 00:04:16.442 14:02:43 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:16.442 14:02:43 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:16.442 14:02:43 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:16.442 14:02:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:16.442 ************************************ 00:04:16.442 START TEST custom_alloc 00:04:16.442 ************************************ 00:04:16.442 14:02:43 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:04:16.442 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:16.442 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:16.442 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:16.442 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:16.442 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:16.442 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:16.442 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:16.442 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:16.442 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:16.442 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:16.442 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:16.442 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- 
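The custom_alloc trace that follows sizes two hugepage pools (1048576 kB -> 512 pages, then 2097152 kB -> 1024 pages) and splits them across NUMA nodes. The arithmetic reduces to the sketch below; the helper names mirror setup/hugepages.sh, but the bodies here are illustrative, assuming size is given in kB and the default hugepage is the 2048 kB Hugepagesize this system reports later in the log.

declare -a nodes_test=()
nr_hugepages=0

get_test_nr_hugepages() {
    local size=$1 hugepage_kb=2048
    # 1048576 kB / 2048 kB -> 512 pages; 2097152 kB -> 1024 pages
    nr_hugepages=$((size / hugepage_kb))
    get_test_nr_hugepages_per_node
}

get_test_nr_hugepages_per_node() {
    # With no explicit per-node request, split evenly across nodes:
    # 512 pages over 2 nodes -> nodes_test[0]=256, nodes_test[1]=256,
    # matching the nodes_test[_no_nodes - 1]=256 assignments in the trace.
    local _no_nodes=2 node
    for ((node = 0; node < _no_nodes; node++)); do
        nodes_test[node]=$((nr_hugepages / _no_nodes))
    done
}

get_test_nr_hugepages 1048576   # -> nr_hugepages=512, nodes_test=(256 256)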
00:04:16.442 14:02:43 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc
00:04:16.442 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:04:16.442 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:04:16.442 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:16.442 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:16.442 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:16.442 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:16.442 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:16.442 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:16.442 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:16.442 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:16.442 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:16.442 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:16.442 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:16.443 14:02:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:04:17.818 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:17.818 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:17.818 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:17.818 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:17.818 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:17.818 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:17.818 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:17.818 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:17.818 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:17.818 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:17.818 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:17.818 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:17.818 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:17.818 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:17.818 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:17.818 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:17.818 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:18.083 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
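For reference, the HUGENODE argument handed to scripts/setup.sh above is assembled by the loop traced at hugepages.sh@181-183. A self-contained bash rendering, with the array values taken straight from the trace (512 pages on node 0, 1024 on node 1):

declare -a nodes_hp=([0]=512 [1]=1024)
declare -a HUGENODE=()
_nr_hugepages=0
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    ((_nr_hugepages += nodes_hp[node]))
done
IFS=,   # custom_alloc sets IFS=, so "${HUGENODE[*]}" joins with commas
echo "HUGENODE=${HUGENODE[*]}"   # HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024
echo "total=$_nr_hugepages"      # total=1536, the figure verify_nr_hugepages checks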
00:04:18.083 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:18.083 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:04:18.083 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:18.083 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:18.083 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:18.083 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:18.083 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:18.083 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:18.083 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:18.083 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:18.083 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:18.083 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:18.083 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:18.083 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:18.083 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:18.083 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:18.083 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:18.083 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:18.083 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.083 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.083 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541604 kB' 'MemFree: 41235248 kB' 'MemAvailable: 46279152 kB' 'Buffers: 2704 kB' 'Cached: 13534232 kB' 'SwapCached: 0 kB' 'Active: 9308140 kB' 'Inactive: 4710900 kB' 'Active(anon): 8919472 kB' 'Inactive(anon): 0 kB' 'Active(file): 388668 kB' 'Inactive(file): 4710900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485316 kB' 'Mapped: 223704 kB' 'Shmem: 8437368 kB' 'KReclaimable: 588988 kB' 'Slab: 955828 kB' 'SReclaimable: 588988 kB' 'SUnreclaim: 366840 kB' 'KernelStack: 13328 kB' 'PageTables: 9556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086540 kB' 'Committed_AS: 10084820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198456 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1846756 kB' 'DirectMap2M: 21141504 kB' 'DirectMap1G: 46137344 kB'
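The scan condensed below is the heart of get_meminfo: it walks that snapshot one 'Key: value' pair at a time until the requested field matches, then echoes the value. A self-contained sketch of what the traced setup/common.sh commands do (a sketch assembled from the trace, not the verbatim SPDK function; assumes bash with extglob for the Node-prefix strip):

get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo mem
    shopt -s extglob
    # Per-node queries read the sysfs copy when a node is given and present
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip "Node 0 " style prefixes
    # printf feeds the scanner one "Key: value [unit]" line at a time;
    # IFS=': ' splits it into the key name and the numeric value
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"   # e.g. AnonHugePages -> 0, HugePages_Surp -> 0
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

echo "surplus pages: $(get_meminfo HugePages_Surp)"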
00:04:18.083 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: key-by-key scan of the snapshot against AnonHugePages; MemTotal through HardwareCorrupted each skipped with continue]
00:04:18.085 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:18.085 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:18.085 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:18.085 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
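Pieced together from the locals declared at hugepages.sh@89-94 and the get_meminfo calls traced here, the verification step amounts to roughly the following. This is a hedged sketch, not the verbatim SPDK function; the transparent_hugepage path is an assumption inferred from the 'always [madvise] never' string tested at @96:

verify_nr_hugepages_sketch() {
    local surp resv anon=0
    # Anonymous THP only counts against the test when THP is not [never]
    if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *'[never]'* ]]; then
        anon=$(get_meminfo AnonHugePages)
    fi
    surp=$(get_meminfo HugePages_Surp)   # pages allocated beyond nr_hugepages
    resv=$(get_meminfo HugePages_Rsvd)   # committed but not yet faulted in
    # The trace then folds per-node counts into nodes_test[] and compares
    # the resulting totals against the requested layout (512 + 1024 = 1536).
    echo "anon=$anon surp=$surp resv=$resv"
}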
00:04:18.085 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:18.085 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:18.085 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:18.085 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:18.085 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:18.085 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:18.085 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:18.085 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:18.085 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:18.085 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:18.085 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.085 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.085 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541604 kB' 'MemFree: 41235684 kB' 'MemAvailable: 46279588 kB' 'Buffers: 2704 kB' 'Cached: 13534236 kB' 'SwapCached: 0 kB' 'Active: 9307544 kB' 'Inactive: 4710900 kB' 'Active(anon): 8918876 kB' 'Inactive(anon): 0 kB' 'Active(file): 388668 kB' 'Inactive(file): 4710900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484716 kB' 'Mapped: 223780 kB' 'Shmem: 8437372 kB' 'KReclaimable: 588988 kB' 'Slab: 955860 kB' 'SReclaimable: 588988 kB' 'SUnreclaim: 366872 kB' 'KernelStack: 13312 kB' 'PageTables: 9476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086540 kB' 'Committed_AS: 10084836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198600 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1846756 kB' 'DirectMap2M: 21141504 kB' 'DirectMap1G: 46137344 kB'
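As a quick cross-check on the snapshot just printed: HugePages_Total: 1536 at Hugepagesize: 2048 kB accounts for exactly 1536 * 2048 kB = 3145728 kB, which matches the Hugetlb line; and with HugePages_Free: 1536, HugePages_Rsvd: 0 and HugePages_Surp: 0, the pool is fully allocated but untouched, which is what the surplus scan below goes on to confirm.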
00:04:18.085 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: key-by-key scan of the snapshot against HugePages_Surp; MemTotal through HugePages_Total each skipped with continue; the capture ends mid-scan]
setup/common.sh@31 -- # read -r var val _ 00:04:18.086 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.086 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.086 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.086 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541604 kB' 'MemFree: 41236692 kB' 'MemAvailable: 46280596 kB' 'Buffers: 2704 kB' 'Cached: 13534252 kB' 'SwapCached: 0 kB' 'Active: 9307124 kB' 'Inactive: 4710900 kB' 'Active(anon): 8918456 kB' 'Inactive(anon): 0 kB' 'Active(file): 388668 kB' 'Inactive(file): 4710900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484308 kB' 'Mapped: 223704 kB' 'Shmem: 8437388 kB' 'KReclaimable: 588988 kB' 'Slab: 955956 kB' 'SReclaimable: 588988 kB' 'SUnreclaim: 366968 kB' 'KernelStack: 13184 kB' 'PageTables: 9160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086540 kB' 'Committed_AS: 10082500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198312 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1846756 kB' 'DirectMap2M: 21141504 kB' 'DirectMap1G: 46137344 kB' 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.087 14:02:45 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.087 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.088 14:02:45 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.088 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 
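For readability, here is a minimal sketch of what setup/common.sh's get_meminfo() is doing in the trace above. The @NN comments mirror the common.sh line tags shown in the xtrace; the individual statements are the traced ones, but the loop structure and the node guard at @25 are assumptions pieced together from the trace, not the verbatim SPDK source.

# Sketch of get_meminfo() as reconstructed from the xtrace (assumptions noted).
shopt -s extglob   # needed for the +([0-9]) pattern used at @29

get_meminfo() {
    local get=$1     # @17: the /proc/meminfo key to fetch
    local node=$2    # @18: optional NUMA node (empty for the global calls)
    local var val    # @19
    local mem_f mem  # @20

    mem_f=/proc/meminfo   # @22: default to the global snapshot
    # @23-@25: prefer the per-node meminfo file when a node was requested;
    # the traced "[[ -n '' ]]" at @25 suggests a guard on $node, assumed here
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo   # @24
    elif [[ -n $node ]]; then
        return 1   # assumed: a node was asked for but has no meminfo file
    fi

    mapfile -t mem < "$mem_f"           # @28: slurp the file into an array
    mem=("${mem[@]#Node +([0-9]) }")    # @29: strip "Node N " prefixes

    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"   # @31: split "Key: value kB"
        [[ $var == "$get" ]] || continue         # @32: skip non-matching keys
        echo "$val"                              # @33: print the value
        return 0
    done
    return 1
}

Against the snapshot just printed, get_meminfo HugePages_Rsvd would write 0, which is exactly the resv=0 assignment recorded at hugepages.sh@100.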
00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:04:18.089 nr_hugepages=1536
00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:18.089 resv_hugepages=0
00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:18.089 surplus_hugepages=0
00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:18.089 anon_hugepages=0
00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.089 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541604 kB' 'MemFree: 41236712 kB' 'MemAvailable: 46280616 kB' 'Buffers: 2704 kB' 'Cached: 13534272 kB' 'SwapCached: 0 kB' 'Active: 9306252 kB' 'Inactive: 4710900 kB' 'Active(anon): 8917584 kB' 'Inactive(anon): 0 kB' 'Active(file): 388668 kB' 'Inactive(file): 4710900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483400 kB' 'Mapped: 223708 kB' 'Shmem: 8437408 kB' 'KReclaimable: 588988 kB' 'Slab: 955956 kB' 'SReclaimable: 588988 kB' 'SUnreclaim: 366968 kB' 'KernelStack: 13024 kB' 'PageTables: 8272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086540 kB' 'Committed_AS: 10082520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198296 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1846756 kB' 'DirectMap2M: 21141504 kB' 'DirectMap1G: 46137344 kB'
[xtrace elided: the @31-@32 read/match/continue cycle runs over the snapshot above, this time matching against HugePages_Total]
00:04:18.091 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:18.091 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:04:18.091 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:18.091 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:18.091 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:18.091 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:04:18.091 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:18.091 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:18.091 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:18.091 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:18.091 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:18.091 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:18.091 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:18.091 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
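The get_nodes block traced at hugepages.sh@27-@33 records the per-node hugepage counts (512 for node0, 1024 for node1, hence no_nodes=2), and the @115-@117 loop then folds the reserved count into nodes_test (the expected per-node allocation, set earlier in hugepages.sh and not visible in this excerpt) before probing each node's surplus. A sketch under those assumptions follows; the xtrace only shows the evaluated assignments at @30, so reading nr_hugepages from each node's hugepages-2048kB directory (consistent with the 'Hugepagesize: 2048 kB' snapshot entry) is a guess, not the confirmed source.

# Sketch of get_nodes() as suggested by the hugepages.sh@27-@33 trace.
shopt -s extglob
declare -a nodes_sys

get_nodes() {
    local node                                              # @27
    for node in /sys/devices/system/node/node+([0-9]); do   # @29
        # @30: per-node count; the nr_hugepages file is an assumed source
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}                               # @32: 2 on this box
    (( no_nodes > 0 ))                                      # @33: sanity check
}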
00:04:18.091 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:18.091 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:18.091 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:18.091 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:18.091 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:18.091 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:18.091 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:18.091 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:18.091 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:18.091 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:18.091 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.091 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.091 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876820 kB' 'MemFree: 24261136 kB' 'MemUsed: 8615684 kB' 'SwapCached: 0 kB' 'Active: 5826544 kB' 'Inactive: 1037772 kB' 'Active(anon): 5542260 kB' 'Inactive(anon): 0 kB' 'Active(file): 284284 kB' 'Inactive(file): 1037772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6593632 kB' 'Mapped: 135704 kB' 'AnonPages: 273816 kB' 'Shmem: 5271576 kB' 'KernelStack: 7832 kB' 'PageTables: 4876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 142948 kB' 'Slab: 309072 kB' 'SReclaimable: 142948 kB' 'SUnreclaim: 166124 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided: the @31-@32 read/match/continue cycle runs over the node0 snapshot above, matching against HugePages_Surp; this capture ends mid-cycle]
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:18.092 14:02:45 
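[Editor's note: the get_meminfo pattern traced above repeats for every node and every field, which is why the per-field xtrace is elided throughout this section. A minimal runnable sketch of that pattern follows; it is reconstructed from the trace and is not the verbatim setup/common.sh source.]

  #!/usr/bin/env bash
  # Sketch only: mirrors the xtrace above, not the actual setup/common.sh.
  shopt -s extglob                      # needed for the +([0-9]) prefix strip

  get_meminfo() {
      local get=$1 node=${2:-}          # field name, optional NUMA node
      local var val _ line mem_f=/proc/meminfo
      # Per-node counters live in sysfs; each line carries a "Node N " prefix.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")  # strip the per-node prefix
      for line in "${mem[@]}"; do
          # Lines look like "MemFree: 24261136 kB" or "HugePages_Surp: 0".
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }

  get_meminfo HugePages_Surp 1          # e.g. surplus hugepages on node 1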
00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.092 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664784 kB' 'MemFree: 16975324 kB' 'MemUsed: 10689460 kB' 'SwapCached: 0 kB' 'Active: 3479724 kB' 'Inactive: 3673128 kB' 'Active(anon): 3375340 kB' 'Inactive(anon): 0 kB' 'Active(file): 104384 kB' 'Inactive(file): 3673128 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6943368 kB' 'Mapped: 88004 kB' 'AnonPages: 209540 kB' 'Shmem: 3165856 kB' 'KernelStack: 5176 kB' 'PageTables: 3348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 446040 kB' 'Slab: 646884 kB' 'SReclaimable: 446040 kB' 'SUnreclaim: 200844 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... xtrace elided: setup/common.sh@31-@32 read each field of the node1 meminfo dump above, compared it against HugePages_Surp, and skipped every non-matching field via continue ...]
00:04:18.094 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:18.094 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:18.094 14:02:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:18.094 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:18.094 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:18.094 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:18.094 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:18.094 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:04:18.094 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:18.094 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:18.094 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
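[Editor's note: the setup/hugepages.sh@126-@130 records around here drive the custom_alloc pass/fail decision. Below is a hedged sketch of that bookkeeping; array names mirror the trace (nodes_test holding the per-node totals gathered via get_meminfo, nodes_sys the totals the test requested), but this is a reconstruction, not the verbatim setup/hugepages.sh.]

  # Sketch only: names mirror the trace; not the actual setup/hugepages.sh.
  nodes_test=(512 1024)                 # per-node totals observed via get_meminfo
  nodes_sys=(512 1024)                  # per-node totals the test configured
  declare -A sorted_t sorted_s

  for node in "${!nodes_test[@]}"; do
      sorted_t[${nodes_test[node]}]=1   # bucket observed values, as at @127
      sorted_s[${nodes_sys[node]}]=1
      echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
  done

  # Final check, as at @130: the comma-joined observed split must match the
  # requested "512,1024" layout or the custom_alloc test fails.
  (IFS=','; [[ ${nodes_test[*]} == 512,1024 ]]) && echo PASS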
00:04:18.094 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:18.094 node1=1024 expecting 1024 00:04:18.094 14:02:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:18.094 00:04:18.094 real 0m1.621s 00:04:18.094 user 0m0.644s 00:04:18.094 sys 0m0.934s 00:04:18.094 14:02:45 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:18.094 14:02:45 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:18.094 ************************************ 00:04:18.094 END TEST custom_alloc 00:04:18.094 ************************************ 00:04:18.094 14:02:45 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:18.094 14:02:45 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:18.094 14:02:45 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:18.094 14:02:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:18.094 ************************************ 00:04:18.094 START TEST no_shrink_alloc 00:04:18.094 ************************************ 00:04:18.094 14:02:45 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:04:18.094 14:02:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:18.094 14:02:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:18.094 14:02:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:18.094 14:02:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:18.094 14:02:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:18.094 14:02:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:18.094 14:02:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:18.094 14:02:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:18.094 14:02:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:18.094 14:02:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:18.094 14:02:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:18.094 14:02:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:18.094 14:02:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:18.094 14:02:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:18.094 14:02:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:18.094 14:02:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:18.094 14:02:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:18.094 14:02:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:18.094 14:02:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:18.094 14:02:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:18.094 14:02:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@9 -- # [[ output == output ]] 00:04:18.094 14:02:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:19.464 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:19.464 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:19.464 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:19.464 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:19.464 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:19.464 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:19.464 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:19.464 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:19.464 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:19.464 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:19.464 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:19.464 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:19.464 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:19.465 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:19.465 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:19.465 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:19.465 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:19.737 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:19.737 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:19.737 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:19.737 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:19.737 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:19.737 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:19.737 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:19.737 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:19.737 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:19.737 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:19.737 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:19.737 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:19.737 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.737 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.737 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.737 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.737 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.737 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.737 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.737 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.737 14:02:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541604 kB' 'MemFree: 42253232 kB' 'MemAvailable: 47297136 kB' 'Buffers: 2704 kB' 'Cached: 13534356 kB' 'SwapCached: 0 kB' 'Active: 9306648 kB' 'Inactive: 4710900 kB' 'Active(anon): 8917980 kB' 'Inactive(anon): 0 kB' 'Active(file): 388668 kB' 'Inactive(file): 4710900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483644 kB' 'Mapped: 223720 kB' 'Shmem: 8437492 kB' 'KReclaimable: 588988 kB' 'Slab: 955996 kB' 'SReclaimable: 588988 kB' 'SUnreclaim: 367008 kB' 'KernelStack: 12944 kB' 'PageTables: 8048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610828 kB' 'Committed_AS: 10082748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198424 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1846756 kB' 'DirectMap2M: 21141504 kB' 'DirectMap1G: 46137344 kB'
[... xtrace elided: setup/common.sh@31-@32 read each field of the system-wide meminfo dump above, compared it against AnonHugePages, and skipped every non-matching field via continue ...]
00:04:19.739 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:19.739 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:19.739 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:19.739 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:19.739 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:19.739 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:19.739 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:19.739 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:19.739 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:19.739 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:19.739 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:19.739 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:19.739 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:19.739 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:19.739 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:19.739 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:19.739 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541604 kB' 'MemFree: 42253060 kB' 'MemAvailable: 47296964 kB' 'Buffers: 2704 kB' 'Cached: 13534356 kB' 'SwapCached: 0 kB' 'Active: 9306260 kB' 'Inactive: 4710900 kB' 'Active(anon): 8917592 kB' 'Inactive(anon): 0 kB' 'Active(file): 388668 kB' 'Inactive(file): 4710900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483248 kB' 'Mapped: 223716 kB' 'Shmem: 8437492 kB' 'KReclaimable: 588988 kB' 'Slab: 955964 kB' 'SReclaimable: 588988 kB' 'SUnreclaim: 366976 kB' 'KernelStack: 12944 kB' 'PageTables: 8028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610828 kB' 'Committed_AS: 10082764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198376 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1846756 kB' 'DirectMap2M: 21141504 kB' 'DirectMap1G: 46137344 kB'
00:04:19.739 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
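[Editor's note: the setup/hugepages.sh@96-@97 records earlier in this run guard the hugepage accounting against transparent hugepages: because /sys/kernel/mm/transparent_hugepage/enabled was "always [madvise] never" rather than pinned to [never], AnonHugePages was sampled (here 0 kB) so that THP-backed memory can be discounted. A sketch of that guard, assuming the standard THP sysfs interface and the get_meminfo helper sketched above; it is a reconstruction, not the verbatim script.]

  # Sketch only: assumes the standard THP sysfs knob; not the actual script.
  thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
  anon=0
  if [[ $thp != *"[never]"* ]]; then
      # THP can hand out anonymous hugepages behind the test's back, so
      # sample AnonHugePages (kB) to discount it; in this run it was 0.
      anon=$(get_meminfo AnonHugePages)
  fi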
00:04:19.739 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... xtrace elided: the HugePages_Surp scan continues across the remaining fields (MemFree through Slab), each compared at setup/common.sh@32 and skipped via continue ...]
00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31
-- # IFS=': ' 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.740 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541604 kB' 'MemFree: 42253060 kB' 'MemAvailable: 47296964 kB' 'Buffers: 2704 kB' 'Cached: 13534380 kB' 'SwapCached: 0 kB' 'Active: 9306384 kB' 'Inactive: 4710900 kB' 'Active(anon): 8917716 kB' 'Inactive(anon): 0 kB' 'Active(file): 388668 kB' 'Inactive(file): 4710900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483396 kB' 'Mapped: 223716 kB' 'Shmem: 8437516 kB' 'KReclaimable: 588988 kB' 'Slab: 956020 kB' 'SReclaimable: 588988 kB' 'SUnreclaim: 367032 kB' 'KernelStack: 12960 kB' 'PageTables: 8092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610828 kB' 'Committed_AS: 10082788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198376 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1846756 kB' 'DirectMap2M: 21141504 kB' 'DirectMap1G: 46137344 kB' 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- 
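For reference, the get_meminfo calls traced above boil down to the following minimal bash sketch, reconstructed from the xtrace lines rather than copied from setup/common.sh, so the exact prologue and error handling are approximations. It pulls a single field out of /proc/meminfo, or out of a node's own meminfo when a node index is supplied, and prints its value:

    #!/usr/bin/env bash
    shopt -s extglob

    get_meminfo() { # usage: get_meminfo FIELD [NODE]
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo mem

        # A per-node query reads that node's meminfo, whose lines carry a
        # "Node N " prefix that the pattern strip below removes.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")

        # Scan field by field: every non-matching key is skipped (the long
        # runs of "continue" in the trace); the first match prints its value.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Surp # -> 0 on this box, per the snapshot above

The escaped \H\u\g\e\P\a\g\e\s\_\S\u\r\p patterns in the trace are just how xtrace renders the quoted "$get" side of that comparison.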
00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
[get_meminfo prologue repeats as above: local node=, mem_f=/proc/meminfo, mapfile -t mem, IFS=': ']
00:04:19.741 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541604 kB' 'MemFree: 42253060 kB' 'MemAvailable: 47296964 kB' 'Buffers: 2704 kB' 'Cached: 13534380 kB' 'SwapCached: 0 kB' 'Active: 9306384 kB' 'Inactive: 4710900 kB' 'Active(anon): 8917716 kB' 'Inactive(anon): 0 kB' 'Active(file): 388668 kB' 'Inactive(file): 4710900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483396 kB' 'Mapped: 223716 kB' 'Shmem: 8437516 kB' 'KReclaimable: 588988 kB' 'Slab: 956020 kB' 'SReclaimable: 588988 kB' 'SUnreclaim: 367032 kB' 'KernelStack: 12960 kB' 'PageTables: 8092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610828 kB' 'Committed_AS: 10082788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198376 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1846756 kB' 'DirectMap2M: 21141504 kB' 'DirectMap1G: 46137344 kB'
[~50 xtrace lines elided: every field from MemTotal through HugePages_Free fails the match at setup/common.sh@32 and hits continue]
00:04:19.744 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:19.744 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:19.744 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:19.744 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:19.744 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:19.744 nr_hugepages=1024
00:04:19.744 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:19.744 resv_hugepages=0
00:04:19.744 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:19.744 surplus_hugepages=0
00:04:19.744 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:19.744 anon_hugepages=0
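The bookkeeping around this point reduces to a small invariant. A hedged sketch, reusing the get_meminfo function from the sketch above (nr_requested is illustrative; the suite hard-codes its own request):

    # Assumes get_meminfo from the previous sketch is defined in this shell.
    nr_requested=1024                   # illustrative: pages the test configured
    anon=$(get_meminfo AnonHugePages)   # 0 in the trace
    surp=$(get_meminfo HugePages_Surp)  # 0 in the trace
    resv=$(get_meminfo HugePages_Rsvd)  # 0 in the trace

    echo "nr_hugepages=$nr_requested"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    # The pool must still account for every requested page: HugePages_Total
    # equals the requested count plus surplus and reserved pages (all zero
    # here), i.e. nothing was shrunk away out from under the test.
    total=$(get_meminfo HugePages_Total)
    (( total == nr_requested + surp + resv )) || exit 1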
00:04:19.744 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:19.744 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:19.744 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:19.744 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
[get_meminfo prologue repeats as above: local node=, mem_f=/proc/meminfo, mapfile -t mem, IFS=': ']
00:04:19.744 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541604 kB' 'MemFree: 42253060 kB' 'MemAvailable: 47296964 kB' 'Buffers: 2704 kB' 'Cached: 13534396 kB' 'SwapCached: 0 kB' 'Active: 9306164 kB' 'Inactive: 4710900 kB' 'Active(anon): 8917496 kB' 'Inactive(anon): 0 kB' 'Active(file): 388668 kB' 'Inactive(file): 4710900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483152 kB' 'Mapped: 223716 kB' 'Shmem: 8437532 kB' 'KReclaimable: 588988 kB' 'Slab: 956020 kB' 'SReclaimable: 588988 kB' 'SUnreclaim: 367032 kB' 'KernelStack: 12944 kB' 'PageTables: 8040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610828 kB' 'Committed_AS: 10082808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198376 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1846756 kB' 'DirectMap2M: 21141504 kB' 'DirectMap1G: 46137344 kB'
[~50 xtrace lines elided: every field from MemTotal through Unaccepted fails the match at setup/common.sh@32 and hits continue]
00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 
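The loop traced above is setup/common.sh's get_meminfo idiom: it reads one "Key: value" pair per line and skips every key that is not the one requested. A minimal bash sketch of that idiom, assuming only /proc/meminfo (a re-creation for illustration, not the SPDK script itself):

#!/usr/bin/env bash
# Scan meminfo-style "Key: value" lines; skip non-matching keys with
# 'continue' (the repeated @32 steps in the trace), echo the first match.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"        # value only; a trailing "kB" lands in $_
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo_value HugePages_Total    # prints e.g. 1024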
00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876820 kB' 'MemFree: 23224420 kB' 'MemUsed: 9652400 kB' 'SwapCached: 0 kB' 'Active: 5826624 kB' 'Inactive: 1037772 kB' 'Active(anon): 5542340 kB' 'Inactive(anon): 0 kB' 'Active(file): 284284 kB' 'Inactive(file): 1037772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6593780 kB' 'Mapped: 135716 kB' 'AnonPages: 273780 kB' 'Shmem: 5271724 kB' 'KernelStack: 7864 kB' 'PageTables: 4980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 142948 kB' 'Slab: 309056 kB' 'SReclaimable: 142948 kB' 'SUnreclaim: 166108 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:19.746 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # scan: MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free each fail [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] and hit continue
00:04:19.747 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:19.747 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:19.747 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:19.748 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:19.748 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:19.748 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:19.748 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:19.748 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:19.748 node0=1024 expecting 1024
00:04:19.748 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:19.748 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:19.748 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:19.748 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:19.748 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:19.748 14:02:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:04:21.138 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:21.138 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:21.138 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:21.138 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:21.138 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:21.138 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:21.138 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:21.138 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:21.138 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:21.138 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:21.138 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:21.138 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:21.138 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:21.138 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:21.138 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:21.138 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:21.138 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:21.138 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:04:21.138 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:21.138 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:21.138 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:21.138 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:21.138 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:21.138 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:21.138 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
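Before that node-0 scan, common.sh pointed the reader at the per-node meminfo file and stripped its "Node 0 " prefix (the @22-@29 steps above). A small sketch of that selection logic under the standard sysfs layout, using bash extglob as the trace does (a simplified re-creation, not the original script):

#!/usr/bin/env bash
# Default to the system-wide file; switch to the per-node file if present,
# then drop the leading "Node N " so both formats parse as "Key: value".
shopt -s extglob                    # enables the +([0-9]) pattern below
node=${1:-0}
mem_f=/proc/meminfo
if [[ -e /sys/devices/system/node/node${node}/meminfo ]]; then
    mem_f=/sys/devices/system/node/node${node}/meminfo
fi
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")    # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
printf '%s\n' "${mem[@]}" | grep -E '^HugePages_(Total|Free|Surp)'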
00:04:21.138 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:21.138 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:21.138 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:21.138 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:21.138 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:21.138 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:21.138 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:21.138 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:21.138 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:21.138 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:21.138 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:21.138 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:21.138 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:21.138 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541604 kB' 'MemFree: 42244672 kB' 'MemAvailable: 47288576 kB' 'Buffers: 2704 kB' 'Cached: 13534472 kB' 'SwapCached: 0 kB' 'Active: 9310788 kB' 'Inactive: 4710900 kB' 'Active(anon): 8922120 kB' 'Inactive(anon): 0 kB' 'Active(file): 388668 kB' 'Inactive(file): 4710900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487220 kB' 'Mapped: 224184 kB' 'Shmem: 8437608 kB' 'KReclaimable: 588988 kB' 'Slab: 955952 kB' 'SReclaimable: 588988 kB' 'SUnreclaim: 366964 kB' 'KernelStack: 12960 kB' 'PageTables: 7976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610828 kB' 'Committed_AS: 10087512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198392 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1846756 kB' 'DirectMap2M: 21141504 kB' 'DirectMap1G: 46137344 kB'
00:04:21.139 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # scan: MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted each fail [[ $var == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] and hit continue
00:04:21.140 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:21.140 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:21.140 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:21.140 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:21.140 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:21.140 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:21.140 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:21.140 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:21.140 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:21.140 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:21.140 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:21.140 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:21.140 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:21.140 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:21.140 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:21.140 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
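This verify pass opened with the hugepages.sh@96 gate above: anonymous hugepages only count when transparent hugepages are not set to [never]. A sketch of that check against the standard sysfs knob (a plausible re-creation; the surrounding logic is assumed):

#!/usr/bin/env bash
# /sys/kernel/mm/transparent_hugepage/enabled reads e.g. "always [madvise] never";
# the bracketed word is the active mode, so match on the literal "[never]".
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp != *"[never]"* ]]; then
    anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
else
    anon=0   # THP disabled: no anonymous hugepages to account for
fi
echo "anon=${anon} kB"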
00:04:21.140 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541604 kB' 'MemFree: 42244888 kB' 'MemAvailable: 47288792 kB' 'Buffers: 2704 kB' 'Cached: 13534472 kB' 'SwapCached: 0 kB' 'Active: 9312704 kB' 'Inactive: 4710900 kB' 'Active(anon): 8924036 kB' 'Inactive(anon): 0 kB' 'Active(file): 388668 kB' 'Inactive(file): 4710900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489608 kB' 'Mapped: 224632 kB' 'Shmem: 8437608 kB' 'KReclaimable: 588988 kB' 'Slab: 955936 kB' 'SReclaimable: 588988 kB' 'SUnreclaim: 366948 kB' 'KernelStack: 13008 kB' 'PageTables: 8120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610828 kB' 'Committed_AS: 10089260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198380 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1846756 kB' 'DirectMap2M: 21141504 kB' 'DirectMap1G: 46137344 kB'
00:04:21.141 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # scan: MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk and Percpu each fail [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] and hit continue
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.141 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.141 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.141 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.141 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.141 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.141 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.141 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.141 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.141 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.141 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.141 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.141 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
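[Editor's sketch] The xtrace above is setup/common.sh's get_meminfo helper scanning every /proc/meminfo key until it reaches the requested one (here HugePages_Surp); each non-matching key hits the `continue` at common.sh@32 and the matching key's value is echoed at common.sh@33. A minimal sketch of that loop, assuming plain bash and simplifying the mapfile/printf bookkeeping visible in the trace (the function name is ours; the field handling follows the log):

    # Minimal sketch of the get_meminfo helper traced above (setup/common.sh@17-33),
    # simplified to a straight read loop instead of the mapfile/printf pair shown
    # in the trace.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # A node-specific file wins when it exists (common.sh@23-24); with node
        # unset the test degenerates to .../node/node/meminfo and falls through,
        # exactly as the trace shows.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        # Per-node files prefix every line with "Node N "; common.sh@29 strips
        # that prefix, which the sed below approximates.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # every other key just continues
            echo "$val"    # kB for sizes, a bare page count for HugePages_*
            return 0
        done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
        return 1           # requested key not present
    }

For example, get_meminfo_sketch HugePages_Surp prints 0 in this run, matching the surp=0 assignment at hugepages.sh@99 further down.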
00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541604 kB' 'MemFree: 42243440 kB' 'MemAvailable: 47287344 kB' 'Buffers: 2704 kB' 'Cached: 13534492 kB' 'SwapCached: 0 kB' 'Active: 9308296 kB' 'Inactive: 4710900 kB' 'Active(anon): 8919628 kB' 'Inactive(anon): 0 kB' 'Active(file): 388668 kB' 'Inactive(file): 4710900 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485140 kB' 'Mapped: 224572 kB' 'Shmem: 8437628 kB' 'KReclaimable: 588988 kB' 'Slab: 956024 kB' 'SReclaimable: 588988 kB' 'SUnreclaim: 367036 kB' 'KernelStack: 12944 kB' 'PageTables: 7956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610828 kB' 'Committed_AS: 10086100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198360 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1846756 kB' 'DirectMap2M: 21141504 kB' 'DirectMap1G: 46137344 kB' 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.142 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.143 
14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.143 14:02:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.143 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.144 14:02:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:21.144 nr_hugepages=1024 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:21.144 resv_hugepages=0 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:21.144 surplus_hugepages=0 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:21.144 anon_hugepages=0 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.144 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541604 kB' 'MemFree: 42239660 kB' 'MemAvailable: 47283564 kB' 'Buffers: 2704 kB' 'Cached: 13534516 kB' 'SwapCached: 0 kB' 'Active: 9312068 kB' 'Inactive: 4710900 kB' 'Active(anon): 8923400 
kB' 'Inactive(anon): 0 kB' 'Active(file): 388668 kB' 'Inactive(file): 4710900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488900 kB' 'Mapped: 224492 kB' 'Shmem: 8437652 kB' 'KReclaimable: 588988 kB' 'Slab: 956024 kB' 'SReclaimable: 588988 kB' 'SUnreclaim: 367036 kB' 'KernelStack: 12944 kB' 'PageTables: 7944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610828 kB' 'Committed_AS: 10089304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198364 kB' 'VmallocChunk: 0 kB' 'Percpu: 64128 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1846756 kB' 'DirectMap2M: 21141504 kB' 'DirectMap1G: 46137344 kB' 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.145 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.146 14:02:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.146 14:02:48 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.146 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.147 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876820 kB' 'MemFree: 23227340 kB' 'MemUsed: 9649480 kB' 'SwapCached: 0 kB' 'Active: 5825952 kB' 'Inactive: 1037772 kB' 'Active(anon): 5541668 kB' 'Inactive(anon): 0 kB' 'Active(file): 284284 kB' 'Inactive(file): 1037772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6593864 kB' 'Mapped: 135720 kB' 'AnonPages: 272976 kB' 'Shmem: 5271808 kB' 'KernelStack: 7784 kB' 'PageTables: 4732 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 142948 kB' 'Slab: 309104 kB' 'SReclaimable: 142948 kB' 'SUnreclaim: 166156 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:21.147 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.147 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [xtrace condensed: the same setup/common.sh@31-32 triplet skips every other node0 meminfo key, MemFree through FilePmdMapped, before the loop reaches the HugePages_* entries below]
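The get_nodes walk traced above (hugepages.sh@27-33) builds the per-node view this scan is verified against; this run is a 2-node box with node0=1024 and node1=0 hugepages. A rough standalone equivalent of that enumeration, assuming 2 MiB hugepages and the standard sysfs layout (reading nr_hugepages here is an assumption about where the count comes from):

#!/usr/bin/env bash
shopt -s extglob
# Sketch of the get_nodes pattern: index an array by NUMA node number
# with that node's configured hugepage count.
declare -a nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
echo "no_nodes=${#nodes_sys[@]}"
for i in "${!nodes_sys[@]}"; do
    echo "node$i=${nodes_sys[$i]}"
done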
00:04:21.148 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.148 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.148 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.148 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.148 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.148 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.148 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.148 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.148 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.148 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.148 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.148 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.148 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.148 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.148 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.148 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:21.148 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:21.148 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:21.148 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:21.148 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:21.148 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:21.148 node0=1024 expecting 1024 00:04:21.148 14:02:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:21.148 00:04:21.148 real 0m3.057s 00:04:21.148 user 0m1.258s 00:04:21.148 sys 0m1.704s 00:04:21.148 14:02:48 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:21.148 14:02:48 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:21.148 ************************************ 00:04:21.148 END TEST no_shrink_alloc 00:04:21.148 ************************************ 00:04:21.148 14:02:48 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:21.148 14:02:48 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:21.148 14:02:48 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:21.148 14:02:48 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:21.148 14:02:48 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:21.148 14:02:48 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:21.148 14:02:48 setup.sh.hugepages -- 
setup/hugepages.sh@41 -- # echo 0 00:04:21.148 14:02:48 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:21.148 14:02:48 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:21.148 14:02:48 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:21.148 14:02:48 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:21.148 14:02:48 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:21.148 14:02:48 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:21.148 14:02:48 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:21.148 00:04:21.148 real 0m12.672s 00:04:21.148 user 0m4.882s 00:04:21.148 sys 0m6.668s 00:04:21.148 14:02:48 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:21.148 14:02:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:21.148 ************************************ 00:04:21.148 END TEST hugepages 00:04:21.148 ************************************ 00:04:21.406 14:02:48 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:04:21.406 14:02:48 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:21.406 14:02:48 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:21.406 14:02:48 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:21.406 ************************************ 00:04:21.406 START TEST driver 00:04:21.406 ************************************ 00:04:21.406 14:02:48 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:04:21.406 * Looking for test storage... 
00:04:21.406 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:21.406 14:02:48 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:21.406 14:02:48 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:21.406 14:02:48 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:23.931 14:02:51 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:23.931 14:02:51 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:23.931 14:02:51 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:23.931 14:02:51 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:24.189 ************************************ 00:04:24.189 START TEST guess_driver 00:04:24.189 ************************************ 00:04:24.189 14:02:51 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:04:24.189 14:02:51 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:24.189 14:02:51 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:24.189 14:02:51 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:24.189 14:02:51 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:24.189 14:02:51 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:24.189 14:02:51 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:24.189 14:02:51 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:24.189 14:02:51 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:24.189 14:02:51 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:24.189 14:02:51 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 187 > 0 )) 00:04:24.189 14:02:51 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:24.189 14:02:51 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:24.189 14:02:51 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:24.189 14:02:51 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:24.189 14:02:51 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:24.189 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:24.189 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:24.189 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:24.189 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:24.189 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:24.189 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:24.189 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:24.189 14:02:51 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:24.189 14:02:51 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:24.189 14:02:51 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:24.189 14:02:51 setup.sh.driver.guess_driver -- 
setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:24.189 14:02:51 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' Looking for driver=vfio-pci 00:04:24.189 14:02:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:24.189 14:02:51 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:24.189 14:02:51 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.189 14:02:51 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:25.562 14:02:52 [xtrace condensed: setup/driver.sh@57-61 loops the read marker / [[ -> == \-\> ]] / [[ vfio-pci == vfio-pci ]] triplet once per line of the config output at 00:04:25.562; the final iterations continue below]
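Stripped of the xtrace noise, the guess_driver logic traced above reduces to: vfio-pci is eligible when the host has IOMMU groups (187 on this node) or vfio's unsafe no-IOMMU override is set, and when modprobe --show-depends resolves vfio_pci to real kernel objects. A condensed sketch of that decision; the uio_pci_generic fallback is an assumption, not something this log exercises:

#!/usr/bin/env bash
shopt -s nullglob
# Sketch of the vfio-pci eligibility check from setup/driver.sh's trace.
pick_driver_sketch() {
    local unsafe_vfio=N
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    # vfio-pci needs an IOMMU (or the unsafe override) and a module that
    # actually resolves to .ko objects on this kernel.
    if { (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == [Yy]* ]]; } &&
        modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
        echo vfio-pci
    else
        echo uio_pci_generic    # assumed fallback; not shown in this log
    fi
}
pick_driver_sketch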
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:25.562 14:02:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:25.562 14:02:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:25.562 14:02:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:25.562 14:02:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:25.562 14:02:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:25.562 14:02:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:25.562 14:02:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:25.562 14:02:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:25.562 14:02:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:25.562 14:02:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:25.562 14:02:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:25.562 14:02:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:25.562 14:02:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:25.562 14:02:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:25.562 14:02:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:25.562 14:02:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:25.562 14:02:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:25.562 14:02:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:25.562 14:02:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:25.562 14:02:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.495 14:02:53 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:26.495 14:02:53 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:26.495 14:02:53 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.752 14:02:53 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:26.752 14:02:53 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:26.752 14:02:53 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:26.752 14:02:53 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:29.279 00:04:29.279 real 0m5.249s 00:04:29.279 user 0m1.292s 00:04:29.279 sys 0m2.065s 00:04:29.279 14:02:56 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:29.279 14:02:56 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:29.279 ************************************ 00:04:29.279 END TEST guess_driver 00:04:29.279 ************************************ 00:04:29.279 00:04:29.279 real 0m8.055s 00:04:29.279 user 0m1.977s 00:04:29.279 sys 0m3.222s 00:04:29.279 14:02:56 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:29.279 14:02:56 
setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:29.279 ************************************ 00:04:29.279 END TEST driver 00:04:29.279 ************************************ 00:04:29.279 14:02:56 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:29.279 14:02:56 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:29.279 14:02:56 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:29.279 14:02:56 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:29.279 ************************************ 00:04:29.279 START TEST devices 00:04:29.279 ************************************ 00:04:29.279 14:02:56 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:29.537 * Looking for test storage... 00:04:29.537 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:29.537 14:02:56 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:29.537 14:02:56 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:29.537 14:02:56 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:29.537 14:02:56 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:31.438 14:02:58 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:31.438 14:02:58 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:31.438 14:02:58 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:31.438 14:02:58 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:31.438 14:02:58 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:31.438 14:02:58 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:31.438 14:02:58 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:31.438 14:02:58 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:31.438 14:02:58 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:31.438 14:02:58 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:31.438 14:02:58 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:31.438 14:02:58 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:31.438 14:02:58 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:31.438 14:02:58 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:31.438 14:02:58 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:31.438 14:02:58 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:31.439 14:02:58 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:31.439 14:02:58 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:84:00.0 00:04:31.439 14:02:58 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\4\:\0\0\.\0* ]] 00:04:31.439 14:02:58 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:31.439 14:02:58 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:31.439 14:02:58 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:31.439 No valid GPT data, bailing 00:04:31.439 14:02:58 
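The probe around this point (spdk-gpt.py reporting 'No valid GPT data, bailing', then the blkid PTTYPE fallback and the 3 GiB minimum traced just below) is how devices.sh picks a safe test disk. A sketch of that filter under the usual sysfs layout; the in-use probe is reduced to blkid alone, where the real script also consults spdk-gpt.py:

#!/usr/bin/env bash
shopt -s nullglob extglob
# Sketch of the test-disk selection traced here: skip zoned namespaces,
# skip disks that carry a partition table, require at least min_disk_size.
min_disk_size=$((3 * 1024 * 1024 * 1024))    # 3221225472, as in the trace
declare -a blocks
for block in /sys/block/nvme!(*c*); do       # skip controller paths like nvme0c0n1
    dev=${block##*/}
    [[ $(< "$block/queue/zoned") != none ]] && continue
    [[ -n $(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null) ]] && continue
    size=$(( $(< "$block/size") * 512 ))     # sysfs size is in 512-byte sectors
    (( size >= min_disk_size )) && blocks+=("$dev")
done
(( ${#blocks[@]} )) && printf 'candidate: %s\n' "${blocks[@]}"

On this node the single nvme0n1 (1000204886016 bytes behind 0000:84:00.0) passes, which is why the log declares it the test disk.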
setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:31.439 14:02:58 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:31.439 14:02:58 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:31.439 14:02:58 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:31.439 14:02:58 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:31.439 14:02:58 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:31.439 14:02:58 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:31.439 14:02:58 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:31.439 14:02:58 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:31.439 14:02:58 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:84:00.0 00:04:31.439 14:02:58 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:31.439 14:02:58 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:31.439 14:02:58 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:31.439 14:02:58 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:31.439 14:02:58 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:31.439 14:02:58 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:31.439 ************************************ 00:04:31.439 START TEST nvme_mount 00:04:31.439 ************************************ 00:04:31.439 14:02:58 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:04:31.439 14:02:58 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:31.439 14:02:58 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:31.439 14:02:58 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.439 14:02:58 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:31.439 14:02:58 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:31.439 14:02:58 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:31.439 14:02:58 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:31.439 14:02:58 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:31.439 14:02:58 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:31.439 14:02:58 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:31.439 14:02:58 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:31.439 14:02:58 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:31.439 14:02:58 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:31.439 14:02:58 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:31.439 14:02:58 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:31.439 14:02:58 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:31.439 14:02:58 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:31.439 14:02:58 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk 
/dev/nvme0n1 --zap-all 00:04:31.439 14:02:58 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:32.375 Creating new GPT entries in memory. 00:04:32.375 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:32.375 other utilities. 00:04:32.375 14:02:59 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:32.375 14:02:59 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:32.375 14:02:59 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:32.375 14:02:59 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:32.375 14:02:59 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:33.309 Creating new GPT entries in memory. 00:04:33.309 The operation has completed successfully. 00:04:33.309 14:03:00 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:33.309 14:03:00 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:33.309 14:03:00 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 4149700 00:04:33.309 14:03:00 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.309 14:03:00 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:33.309 14:03:00 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.309 14:03:00 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:33.309 14:03:00 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:33.309 14:03:00 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.309 14:03:00 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:84:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:33.309 14:03:00 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:04:33.309 14:03:00 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:33.309 14:03:00 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.309 14:03:00 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:33.309 14:03:00 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:33.309 14:03:00 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:33.309 14:03:00 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:33.309 14:03:00 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:33.309 14:03:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 
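The sgdisk/mkfs/mount sequence traced above is the heart of nvme_mount: zap the label, carve one 1 GiB partition (sectors 2048 through 2099199, i.e. 2097152 x 512 B), format it, mount it, and drop a marker file for the verify step to find. Replayed as a plain script under stated assumptions; it is destructive, and the device and mount point here are placeholders:

#!/usr/bin/env bash
set -euo pipefail
# Sketch of the nvme_mount setup steps from the trace. DESTRUCTIVE:
# run only against a disposable disk. $disk and $mnt are placeholders.
disk=/dev/nvme0n1
mnt=/tmp/nvme_mount_sketch

sgdisk "$disk" --zap-all                # destroy GPT and MBR structures
sgdisk "$disk" --new=1:2048:2099199     # (2099199 - 2048 + 1) * 512 B = 1 GiB
mkdir -p "$mnt"
mkfs.ext4 -qF "${disk}p1"
mount "${disk}p1" "$mnt"
touch "$mnt/test_nvme"                  # the dummy file the verify step checks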
00:04:33.309 14:03:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:04:33.309 14:03:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:33.309 14:03:00 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.309 14:03:00 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:34.683 14:03:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:34.683 14:03:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:34.683 14:03:01 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:34.683 14:03:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status [xtrace condensed: devices.sh@62/@60 then checks each remaining PCI address, 0000:00:04.7 down to 0000:00:04.0 and 0000:80:04.7 down to 0000:80:04.0, against 0000:84:00.0 and reads past it] 00:04:34.684 14:03:01 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 14:03:01 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 14:03:01 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 14:03:01 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 14:03:01 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 14:03:01 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 14:03:01 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 14:03:01 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 14:03:01 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 14:03:01 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 14:03:01 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 14:03:01 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa /dev/nvme0n1: calling ioctl to re-read partition table: Success 14:03:02 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 14:03:02 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 14:03:02 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount
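cleanup_nvme, whose wipefs output appears in the trace above, is the inverse operation: unmount if mounted, then scrub signatures from the partition and the whole disk. The erased bytes in the log are recognizable magics: 53 ef is the ext4 superblock signature and 45 46 49 20 50 41 52 54 is ASCII 'EFI PART'. A sketch with the same placeholder names as the setup sketch:

#!/usr/bin/env bash
# Sketch of cleanup_nvme from the trace; $disk/$mnt as in the previous
# sketch. wipefs emits the "N bytes were erased" lines seen in the log.
disk=/dev/nvme0n1
mnt=/tmp/nvme_mount_sketch

mountpoint -q "$mnt" && umount "$mnt"
[[ -b ${disk}p1 ]] && wipefs --all "${disk}p1"    # ext4 magic 53 ef
[[ -b $disk ]] && wipefs --all "$disk"            # GPT 'EFI PART' + PMBR 55 aa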
00:04:34.942 14:03:02 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:34.942 14:03:02 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:34.942 14:03:02 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.200 14:03:02 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:84:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 14:03:02 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 14:03:02 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 14:03:02 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 14:03:02 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 14:03:02 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 14:03:02 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 14:03:02 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 14:03:02 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 14:03:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 14:03:02 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 14:03:02 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 14:03:02 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 14:03:02 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:36.579 14:03:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:36.579 14:03:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:36.579 14:03:03 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:36.579 14:03:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status [xtrace condensed: devices.sh@62/@60 again walks every other PCI address (0000:00:04.7-0000:00:04.0, 0000:80:04.7-0000:80:04.0) without a match] 14:03:03 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 14:03:03 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 14:03:03 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 14:03:03 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 14:03:03 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 14:03:03 setup.sh.devices.nvme_mount --
setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:36.580 14:03:03 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:84:00.0 data@nvme0n1 '' '' 00:04:36.580 14:03:03 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:04:36.580 14:03:03 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:36.580 14:03:03 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:36.580 14:03:03 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:36.580 14:03:03 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:36.580 14:03:03 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:36.580 14:03:03 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:36.580 14:03:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.580 14:03:03 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:04:36.580 14:03:03 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:36.580 14:03:03 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.580 14:03:03 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:37.952 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:37.952 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:37.952 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:37.952 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.952 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:37.952 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.952 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:37.952 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.952 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:37.952 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.952 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:37.952 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.952 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:37.952 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.952 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:37.952 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.952 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:37.952 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.952 
14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:37.952 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.952 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:37.952 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.952 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:37.952 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.952 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:37.952 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.952 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:37.952 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.952 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:37.952 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.952 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:37.952 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.952 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:37.952 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.952 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:37.952 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.211 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:38.211 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:38.211 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:38.211 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:38.211 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.211 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:38.211 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:38.211 14:03:05 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:38.211 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:38.211 00:04:38.211 real 0m6.964s 00:04:38.211 user 0m1.767s 00:04:38.211 sys 0m2.764s 00:04:38.211 14:03:05 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:38.211 14:03:05 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:38.211 ************************************ 00:04:38.211 END TEST nvme_mount 00:04:38.211 ************************************ 00:04:38.211 14:03:05 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:38.211 14:03:05 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 
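The nvme_mount test that just ended follows a short pattern: make a filesystem on the raw namespace, mount it under the test directory, drop a marker file, verify both, then tear down. A minimal sketch of that flow, assuming a scratch namespace that is safe to wipe (the device and mount-point names here are illustrative; run as root):

#!/usr/bin/env bash
set -euo pipefail

dev=/dev/nvme0n1      # assumption: scratch namespace, contents expendable
mnt=$(mktemp -d)      # stand-in for test/setup/nvme_mount above

mkfs.ext4 -qF "$dev" 1024M   # same invocation as setup/common.sh@71
mount "$dev" "$mnt"          # common.sh@72
touch "$mnt/test_nvme"       # marker file that verify() checks for

mountpoint -q "$mnt"         # devices.sh@71: the mount is live
[[ -e $mnt/test_nvme ]]      # devices.sh@73: the marker survived

umount "$mnt"                # teardown mirrors cleanup_nvme above
wipefs --all "$dev"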
00:04:38.211 14:03:05 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:38.211 14:03:05 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:38.211 ************************************ 00:04:38.211 START TEST dm_mount 00:04:38.211 ************************************ 00:04:38.211 14:03:05 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:04:38.211 14:03:05 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:38.211 14:03:05 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:38.211 14:03:05 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:38.211 14:03:05 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:38.211 14:03:05 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:38.211 14:03:05 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:38.211 14:03:05 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:38.211 14:03:05 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:38.211 14:03:05 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:38.211 14:03:05 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:38.211 14:03:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:38.211 14:03:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:38.211 14:03:05 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:38.211 14:03:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:38.211 14:03:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:38.211 14:03:05 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:38.211 14:03:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:38.211 14:03:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:38.211 14:03:05 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:38.211 14:03:05 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:38.211 14:03:05 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:39.183 Creating new GPT entries in memory. 00:04:39.183 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:39.183 other utilities. 00:04:39.183 14:03:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:39.183 14:03:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:39.183 14:03:06 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:39.183 14:03:06 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:39.183 14:03:06 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:40.556 Creating new GPT entries in memory. 00:04:40.556 The operation has completed successfully. 
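The two sgdisk --new calls in this stretch come from a loop in setup/common.sh that converts a 1 GiB partition size into 512-byte sectors and lays the partitions out back to back, which is exactly where 2048:2099199 and 2099200:4196351 come from. A sketch of that arithmetic (the disk name is illustrative and the commands are destructive):

#!/usr/bin/env bash
set -euo pipefail

disk=/dev/nvme0n1          # illustrative; the test derives this from $pv
part_no=2
size=1073741824            # 1 GiB per partition, in bytes
(( size /= 512 ))          # common.sh@51: bytes -> 512 B sectors (2097152)

sgdisk "$disk" --zap-all   # common.sh@56: drop any existing GPT

part_start=0 part_end=0
for (( part = 1; part <= part_no; part++ )); do
  (( part_start = part_start == 0 ? 2048 : part_end + 1 ))   # common.sh@58
  (( part_end = part_start + size - 1 ))                     # common.sh@59
  # part 1: 2048:2099199, part 2: 2099200:4196351, as logged above
  flock "$disk" sgdisk "$disk" --new=$part:$part_start:$part_end
done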
00:04:40.556 14:03:07 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:40.556 14:03:07 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:40.556 14:03:07 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:40.556 14:03:07 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:40.556 14:03:07 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:41.491 The operation has completed successfully. 00:04:41.491 14:03:08 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:41.491 14:03:08 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:41.491 14:03:08 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 4152481 00:04:41.491 14:03:08 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:41.491 14:03:08 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:41.491 14:03:08 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:41.491 14:03:08 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:41.491 14:03:08 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:41.491 14:03:08 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:41.491 14:03:08 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:41.491 14:03:08 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:41.491 14:03:08 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:41.491 14:03:08 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:41.491 14:03:08 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:41.491 14:03:08 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:41.491 14:03:08 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:41.491 14:03:08 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:41.491 14:03:08 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:04:41.491 14:03:08 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:41.491 14:03:08 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:41.491 14:03:08 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:41.491 14:03:08 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:41.491 14:03:08 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:84:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:41.491 14:03:08 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:04:41.491 14:03:08 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:41.491 14:03:08 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:41.491 14:03:08 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:41.491 14:03:08 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:41.491 14:03:08 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:41.491 14:03:08 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:41.491 14:03:08 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:41.491 14:03:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.491 14:03:08 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:04:41.491 14:03:08 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:41.491 14:03:08 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.491 14:03:08 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:42.426 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:42.426 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:42.426 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:42.426 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.426 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:42.426 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.426 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:42.426 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.426 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:42.426 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.426 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:42.426 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.426 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:42.426 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.426 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:42.426 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.426 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:42.426 14:03:09 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.426 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:42.426 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.426 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:42.426 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.426 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:42.426 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.426 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:42.426 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.426 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:42.426 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.426 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:42.426 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.426 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:42.426 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.426 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:42.426 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.426 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:42.426 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.684 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:42.684 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:42.684 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:42.684 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:42.684 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:42.684 14:03:09 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:42.684 14:03:10 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:84:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:42.684 14:03:10 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:04:42.684 14:03:10 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:42.684 14:03:10 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:42.684 14:03:10 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:42.684 14:03:10 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # 
local found=0 00:04:42.684 14:03:10 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:42.684 14:03:10 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:42.684 14:03:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.684 14:03:10 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:04:42.685 14:03:10 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:42.685 14:03:10 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.685 14:03:10 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:44.059 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:44.059 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:44.059 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:44.059 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.059 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:44.059 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.059 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:44.059 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.059 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:44.059 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.059 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:44.059 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.059 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:44.059 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.059 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:44.059 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.059 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:44.059 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.059 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:44.059 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.059 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:44.059 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.059 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:44.059 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 
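Each block of '[[ 0000:xx:xx.x == ... ]]' comparisons below is one pass of a read loop in setup/devices.sh: it runs setup.sh config with PCI_ALLOWED narrowed to the device under test, splits every output row into a BDF plus a status column, and flips found to 1 when the allowlisted BDF reports the expected active devices. A condensed sketch of that loop; run_setup_config and the spdk path below are stand-ins, not the real helper:

#!/usr/bin/env bash
set -euo pipefail

dev=0000:84:00.0                  # BDF under test
want='holder@nvme0n1p1:dm-0'      # substring verify() expects in the status

run_setup_config() {              # hypothetical wrapper for the real call
  PCI_ALLOWED="$dev" /path/to/spdk/scripts/setup.sh config
}

# rows look like: 0000:84:00.0 (8086 0a54): Active devices: ..., so not binding PCI dev
found=0
while read -r pci _ _ status; do
  if [[ $pci == "$dev" && $status == *"Active devices: "*"$want"* ]]; then
    found=1                       # in-use device, setup.sh leaves it alone
  fi
done < <(run_setup_config)

(( found == 1 )) || { echo "verify failed for $dev" >&2; exit 1; }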
00:04:44.059 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:44.059 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.059 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:44.059 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.059 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:44.059 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.059 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:44.059 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.059 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:44.059 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.059 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:44.059 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.317 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:44.317 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:44.317 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:44.317 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:44.317 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:44.317 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:44.317 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:44.317 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:44.317 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:44.317 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:44.317 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:44.317 14:03:11 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:44.317 00:04:44.317 real 0m6.065s 00:04:44.317 user 0m1.114s 00:04:44.317 sys 0m1.781s 00:04:44.317 14:03:11 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:44.318 14:03:11 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:44.318 ************************************ 00:04:44.318 END TEST dm_mount 00:04:44.318 ************************************ 00:04:44.318 14:03:11 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:44.318 14:03:11 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:44.318 14:03:11 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:44.318 14:03:11 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:44.318 14:03:11 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:44.318 14:03:11 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 
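The dm_mount run that just finished created its target with 'dmsetup create nvme_dm_test'; the table piped into dmsetup is elided from the log, so the linear concatenation of the two 1 GiB partitions below is an assumption made for illustration, while the verify and teardown steps mirror devices.sh:

#!/usr/bin/env bash
set -euo pipefail

name=nvme_dm_test
p1=/dev/nvme0n1p1 p2=/dev/nvme0n1p2
sectors=2097152                    # 1 GiB per partition in 512 B sectors

# assumed table: concatenate p1 and p2 into one linear device
dmsetup create "$name" <<EOF
0 $sectors linear $p1 0
$sectors $sectors linear $p2 0
EOF

dm=$(readlink -f "/dev/mapper/$name")                   # devices.sh@165: /dev/dm-0
[[ -e /sys/class/block/${p1##*/}/holders/${dm##*/} ]]   # devices.sh@168

# teardown mirrors cleanup_dm (devices.sh@33-43)
dmsetup remove --force "$name"
wipefs --all "$p1" "$p2"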
00:04:44.318 14:03:11 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:44.576 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:44.576 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:44.576 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:44.576 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:44.576 14:03:11 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:44.576 14:03:11 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:44.576 14:03:11 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:44.576 14:03:11 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:44.576 14:03:11 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:44.576 14:03:11 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:44.576 14:03:11 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:44.576 00:04:44.576 real 0m15.177s 00:04:44.576 user 0m3.681s 00:04:44.576 sys 0m5.650s 00:04:44.576 14:03:11 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:44.576 14:03:11 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:44.576 ************************************ 00:04:44.576 END TEST devices 00:04:44.576 ************************************ 00:04:44.576 00:04:44.576 real 0m47.622s 00:04:44.576 user 0m14.220s 00:04:44.576 sys 0m21.551s 00:04:44.576 14:03:11 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:44.576 14:03:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:44.576 ************************************ 00:04:44.576 END TEST setup.sh 00:04:44.576 ************************************ 00:04:44.576 14:03:11 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:04:45.951 Hugepages 00:04:45.951 node hugesize free / total 00:04:45.951 node0 1048576kB 0 / 0 00:04:45.951 node0 2048kB 2048 / 2048 00:04:45.951 node1 1048576kB 0 / 0 00:04:45.951 node1 2048kB 0 / 0 00:04:45.951 00:04:45.951 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:45.951 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:45.951 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:04:45.951 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:45.951 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:45.951 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:45.951 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:45.951 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:45.951 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:45.951 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:45.951 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:45.951 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:45.951 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:45.951 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:45.951 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:45.951 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:45.951 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:04:45.951 NVMe 0000:84:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:45.951 14:03:13 -- spdk/autotest.sh@130 -- # uname -s 00:04:45.951 14:03:13 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:45.951 14:03:13 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:45.951 14:03:13 -- common/autotest_common.sh@1527 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:47.324 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:47.324 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:47.324 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:47.324 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:47.324 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:47.582 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:47.582 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:47.582 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:47.582 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:47.582 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:47.582 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:47.582 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:47.582 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:47.582 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:47.582 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:47.582 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:48.516 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:04:48.516 14:03:15 -- common/autotest_common.sh@1528 -- # sleep 1 00:04:49.450 14:03:16 -- common/autotest_common.sh@1529 -- # bdfs=() 00:04:49.450 14:03:16 -- common/autotest_common.sh@1529 -- # local bdfs 00:04:49.450 14:03:16 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:04:49.450 14:03:16 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:04:49.450 14:03:16 -- common/autotest_common.sh@1509 -- # bdfs=() 00:04:49.450 14:03:16 -- common/autotest_common.sh@1509 -- # local bdfs 00:04:49.450 14:03:16 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:49.450 14:03:16 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:49.450 14:03:16 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:04:49.708 14:03:16 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:04:49.708 14:03:16 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:84:00.0 00:04:49.708 14:03:16 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:51.083 Waiting for block devices as requested 00:04:51.083 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:04:51.083 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:51.083 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:51.083 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:51.342 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:51.343 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:51.343 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:51.343 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:51.601 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:51.601 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:51.601 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:51.601 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:51.860 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:51.860 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:51.860 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:52.118 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:52.118 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:52.118 14:03:19 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:04:52.118 14:03:19 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:84:00.0 00:04:52.118 14:03:19 -- common/autotest_common.sh@1498 -- # readlink -f 
/sys/class/nvme/nvme0 00:04:52.118 14:03:19 -- common/autotest_common.sh@1498 -- # grep 0000:84:00.0/nvme/nvme 00:04:52.118 14:03:19 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 00:04:52.118 14:03:19 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 ]] 00:04:52.118 14:03:19 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 00:04:52.118 14:03:19 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:04:52.118 14:03:19 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:04:52.118 14:03:19 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:04:52.118 14:03:19 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:04:52.118 14:03:19 -- common/autotest_common.sh@1541 -- # grep oacs 00:04:52.118 14:03:19 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:04:52.375 14:03:19 -- common/autotest_common.sh@1541 -- # oacs=' 0xf' 00:04:52.375 14:03:19 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:04:52.375 14:03:19 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:04:52.375 14:03:19 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:04:52.375 14:03:19 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:04:52.375 14:03:19 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:04:52.375 14:03:19 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:04:52.375 14:03:19 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:04:52.375 14:03:19 -- common/autotest_common.sh@1553 -- # continue 00:04:52.375 14:03:19 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:52.375 14:03:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:52.375 14:03:19 -- common/autotest_common.sh@10 -- # set +x 00:04:52.375 14:03:19 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:52.375 14:03:19 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:52.375 14:03:19 -- common/autotest_common.sh@10 -- # set +x 00:04:52.375 14:03:19 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:53.750 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:53.750 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:53.750 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:53.750 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:53.750 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:53.750 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:53.750 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:53.750 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:53.750 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:53.750 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:53.750 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:53.750 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:53.750 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:53.750 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:53.750 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:53.750 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:54.684 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:04:54.943 14:03:22 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:54.943 14:03:22 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:54.943 14:03:22 -- common/autotest_common.sh@10 -- # set +x 00:04:54.943 14:03:22 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:54.943 14:03:22 -- common/autotest_common.sh@1587 -- # 
mapfile -t bdfs 00:04:54.943 14:03:22 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:04:54.943 14:03:22 -- common/autotest_common.sh@1573 -- # bdfs=() 00:04:54.943 14:03:22 -- common/autotest_common.sh@1573 -- # local bdfs 00:04:54.943 14:03:22 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:04:54.943 14:03:22 -- common/autotest_common.sh@1509 -- # bdfs=() 00:04:54.943 14:03:22 -- common/autotest_common.sh@1509 -- # local bdfs 00:04:54.943 14:03:22 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:54.943 14:03:22 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:54.943 14:03:22 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:04:54.943 14:03:22 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:04:54.943 14:03:22 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:84:00.0 00:04:54.943 14:03:22 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:04:54.943 14:03:22 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:84:00.0/device 00:04:54.943 14:03:22 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:04:54.943 14:03:22 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:54.943 14:03:22 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:04:54.943 14:03:22 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:84:00.0 00:04:54.943 14:03:22 -- common/autotest_common.sh@1588 -- # [[ -z 0000:84:00.0 ]] 00:04:54.943 14:03:22 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=4158849 00:04:54.943 14:03:22 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.943 14:03:22 -- common/autotest_common.sh@1594 -- # waitforlisten 4158849 00:04:54.943 14:03:22 -- common/autotest_common.sh@827 -- # '[' -z 4158849 ']' 00:04:54.943 14:03:22 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.943 14:03:22 -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:54.943 14:03:22 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.943 14:03:22 -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:54.943 14:03:22 -- common/autotest_common.sh@10 -- # set +x 00:04:54.943 [2024-07-24 14:03:22.213853] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
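opal_revert_cleanup, running above, builds its BDF list in two steps: gen_nvme.sh emits an SPDK config whose traddr fields are the NVMe controllers' PCI addresses, and each address is kept only if its PCI device ID read from sysfs equals 0x0a54, the ID this test targets. A sketch of the same filter (rootdir is whatever checkout you run from):

#!/usr/bin/env bash
set -euo pipefail

rootdir=/path/to/spdk        # assumption: your SPDK checkout

# autotest_common.sh@1510: traddr of every NVMe controller SPDK can see
mapfile -t all_bdfs < <("$rootdir/scripts/gen_nvme.sh" \
                          | jq -r '.config[].params.traddr')

# autotest_common.sh@1575-1578: keep controllers whose device ID is 0x0a54
bdfs=()
for bdf in "${all_bdfs[@]}"; do
  device=$(cat "/sys/bus/pci/devices/$bdf/device")
  [[ $device == 0x0a54 ]] && bdfs+=("$bdf")
done

printf '%s\n' "${bdfs[@]}"   # 0000:84:00.0 on this node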
00:04:54.943 [2024-07-24 14:03:22.213962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4158849 ] 00:04:54.943 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.201 [2024-07-24 14:03:22.280620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.488 [2024-07-24 14:03:22.368158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.488 14:03:22 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:55.488 14:03:22 -- common/autotest_common.sh@860 -- # return 0 00:04:55.488 14:03:22 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:04:55.488 14:03:22 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:04:55.488 14:03:22 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:84:00.0 00:04:58.768 nvme0n1 00:04:58.768 14:03:25 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:58.768 [2024-07-24 14:03:25.940945] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:58.768 [2024-07-24 14:03:25.940993] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:58.768 request: 00:04:58.768 { 00:04:58.768 "nvme_ctrlr_name": "nvme0", 00:04:58.768 "password": "test", 00:04:58.768 "method": "bdev_nvme_opal_revert", 00:04:58.768 "req_id": 1 00:04:58.768 } 00:04:58.768 Got JSON-RPC error response 00:04:58.768 response: 00:04:58.768 { 00:04:58.768 "code": -32603, 00:04:58.768 "message": "Internal error" 00:04:58.768 } 00:04:58.768 14:03:25 -- common/autotest_common.sh@1600 -- # true 00:04:58.768 14:03:25 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:04:58.768 14:03:25 -- common/autotest_common.sh@1604 -- # killprocess 4158849 00:04:58.768 14:03:25 -- common/autotest_common.sh@946 -- # '[' -z 4158849 ']' 00:04:58.768 14:03:25 -- common/autotest_common.sh@950 -- # kill -0 4158849 00:04:58.768 14:03:25 -- common/autotest_common.sh@951 -- # uname 00:04:58.768 14:03:25 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:58.768 14:03:25 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4158849 00:04:58.768 14:03:25 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:58.768 14:03:25 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:58.768 14:03:25 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4158849' 00:04:58.768 killing process with pid 4158849 00:04:58.768 14:03:25 -- common/autotest_common.sh@965 -- # kill 4158849 00:04:58.768 14:03:25 -- common/autotest_common.sh@970 -- # wait 4158849 00:04:58.768 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 (message repeated several hundred times during spdk_tgt shutdown; duplicate lines trimmed)
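The JSON-RPC exchange above is the OPAL revert attempt itself: with the controller attached, bdev_nvme_opal_revert tries to start an admin SP session using the password 'test', the drive rejects it with Opal status 18, and rpc.py surfaces that as a -32603 Internal error. The cleanup helper tolerates this, roughly like the sketch below (paths are illustrative):

#!/usr/bin/env bash
set -euo pipefail

rootdir=/path/to/spdk        # assumption: your SPDK checkout
rpc="$rootdir/scripts/rpc.py"

"$rpc" bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:84:00.0

# Revert can legitimately fail on drives not provisioned for this
# password; mirror the log's trailing "true" and continue either way.
if ! "$rpc" bdev_nvme_opal_revert -b nvme0 -p test; then
  echo "opal revert failed, continuing cleanup anyway"
fi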
0 of DMA remapping cleared instead of 2097152 00:04:58.770 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:58.770 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:58.770 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:58.770 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:58.770 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:58.770 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:58.770 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:58.770 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:58.770 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:58.770 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:00.665 14:03:27 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:00.665 14:03:27 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:00.665 14:03:27 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:00.665 14:03:27 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:00.665 14:03:27 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:00.665 14:03:27 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:00.665 14:03:27 -- common/autotest_common.sh@10 -- # set +x 00:05:00.665 14:03:27 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:00.665 14:03:27 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:00.665 14:03:27 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:00.665 14:03:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:00.665 14:03:27 -- common/autotest_common.sh@10 -- # set +x 00:05:00.665 ************************************ 00:05:00.665 START TEST env 00:05:00.665 ************************************ 00:05:00.665 14:03:27 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:00.665 * Looking for test storage... 
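The env suite kicked off above is an ordinary shell script, so the same run can be reproduced outside Jenkins. A minimal sketch, assuming the workspace layout shown in this log and root privileges for hugepage/VFIO access:

```bash
cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
sudo ./test/env/env.sh            # drives memory_ut, vtophys, pci_ut, and friends
# or run a single unit-test binary directly, e.g. the memory-map tests:
sudo ./test/env/memory/memory_ut
```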
00:05:00.665 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:05:00.665 14:03:27 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:00.665 14:03:27 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:00.665 14:03:27 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:00.665 14:03:27 env -- common/autotest_common.sh@10 -- # set +x 00:05:00.665 ************************************ 00:05:00.665 START TEST env_memory 00:05:00.665 ************************************ 00:05:00.665 14:03:27 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:00.665 00:05:00.665 00:05:00.665 CUnit - A unit testing framework for C - Version 2.1-3 00:05:00.665 http://cunit.sourceforge.net/ 00:05:00.665 00:05:00.665 00:05:00.666 Suite: memory 00:05:00.666 Test: alloc and free memory map ...[2024-07-24 14:03:27.885334] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:00.666 passed 00:05:00.666 Test: mem map translation ...[2024-07-24 14:03:27.905211] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:00.666 [2024-07-24 14:03:27.905232] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:00.666 [2024-07-24 14:03:27.905283] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:00.666 [2024-07-24 14:03:27.905294] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:00.666 passed 00:05:00.666 Test: mem map registration ...[2024-07-24 14:03:27.946063] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:00.666 [2024-07-24 14:03:27.946082] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:00.666 passed 00:05:00.666 Test: mem map adjacent registrations ...passed 00:05:00.666 00:05:00.666 Run Summary: Type Total Ran Passed Failed Inactive 00:05:00.666 suites 1 1 n/a 0 0 00:05:00.666 tests 4 4 4 0 0 00:05:00.666 asserts 152 152 152 0 n/a 00:05:00.666 00:05:00.666 Elapsed time = 0.141 seconds 00:05:00.666 00:05:00.666 real 0m0.149s 00:05:00.666 user 0m0.140s 00:05:00.666 sys 0m0.009s 00:05:00.666 14:03:27 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:00.666 14:03:27 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:00.666 ************************************ 00:05:00.666 END TEST env_memory 00:05:00.666 ************************************ 00:05:00.666 14:03:28 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:00.666 14:03:28 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:00.666 14:03:28 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:00.666 14:03:28 env -- 
common/autotest_common.sh@10 -- # set +x 00:05:00.924 ************************************ 00:05:00.924 START TEST env_vtophys 00:05:00.924 ************************************ 00:05:00.924 14:03:28 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:00.924 EAL: lib.eal log level changed from notice to debug 00:05:00.924 EAL: Detected lcore 0 as core 0 on socket 0 00:05:00.924 EAL: Detected lcore 1 as core 1 on socket 0 00:05:00.924 EAL: Detected lcore 2 as core 2 on socket 0 00:05:00.924 EAL: Detected lcore 3 as core 3 on socket 0 00:05:00.924 EAL: Detected lcore 4 as core 4 on socket 0 00:05:00.924 EAL: Detected lcore 5 as core 5 on socket 0 00:05:00.924 EAL: Detected lcore 6 as core 8 on socket 0 00:05:00.924 EAL: Detected lcore 7 as core 9 on socket 0 00:05:00.924 EAL: Detected lcore 8 as core 10 on socket 0 00:05:00.924 EAL: Detected lcore 9 as core 11 on socket 0 00:05:00.924 EAL: Detected lcore 10 as core 12 on socket 0 00:05:00.924 EAL: Detected lcore 11 as core 13 on socket 0 00:05:00.924 EAL: Detected lcore 12 as core 0 on socket 1 00:05:00.924 EAL: Detected lcore 13 as core 1 on socket 1 00:05:00.924 EAL: Detected lcore 14 as core 2 on socket 1 00:05:00.924 EAL: Detected lcore 15 as core 3 on socket 1 00:05:00.924 EAL: Detected lcore 16 as core 4 on socket 1 00:05:00.924 EAL: Detected lcore 17 as core 5 on socket 1 00:05:00.924 EAL: Detected lcore 18 as core 8 on socket 1 00:05:00.924 EAL: Detected lcore 19 as core 9 on socket 1 00:05:00.924 EAL: Detected lcore 20 as core 10 on socket 1 00:05:00.924 EAL: Detected lcore 21 as core 11 on socket 1 00:05:00.924 EAL: Detected lcore 22 as core 12 on socket 1 00:05:00.924 EAL: Detected lcore 23 as core 13 on socket 1 00:05:00.924 EAL: Detected lcore 24 as core 0 on socket 0 00:05:00.924 EAL: Detected lcore 25 as core 1 on socket 0 00:05:00.924 EAL: Detected lcore 26 as core 2 on socket 0 00:05:00.924 EAL: Detected lcore 27 as core 3 on socket 0 00:05:00.924 EAL: Detected lcore 28 as core 4 on socket 0 00:05:00.924 EAL: Detected lcore 29 as core 5 on socket 0 00:05:00.924 EAL: Detected lcore 30 as core 8 on socket 0 00:05:00.924 EAL: Detected lcore 31 as core 9 on socket 0 00:05:00.924 EAL: Detected lcore 32 as core 10 on socket 0 00:05:00.924 EAL: Detected lcore 33 as core 11 on socket 0 00:05:00.924 EAL: Detected lcore 34 as core 12 on socket 0 00:05:00.924 EAL: Detected lcore 35 as core 13 on socket 0 00:05:00.924 EAL: Detected lcore 36 as core 0 on socket 1 00:05:00.924 EAL: Detected lcore 37 as core 1 on socket 1 00:05:00.924 EAL: Detected lcore 38 as core 2 on socket 1 00:05:00.924 EAL: Detected lcore 39 as core 3 on socket 1 00:05:00.924 EAL: Detected lcore 40 as core 4 on socket 1 00:05:00.924 EAL: Detected lcore 41 as core 5 on socket 1 00:05:00.924 EAL: Detected lcore 42 as core 8 on socket 1 00:05:00.924 EAL: Detected lcore 43 as core 9 on socket 1 00:05:00.924 EAL: Detected lcore 44 as core 10 on socket 1 00:05:00.924 EAL: Detected lcore 45 as core 11 on socket 1 00:05:00.924 EAL: Detected lcore 46 as core 12 on socket 1 00:05:00.924 EAL: Detected lcore 47 as core 13 on socket 1 00:05:00.924 EAL: Maximum logical cores by configuration: 128 00:05:00.924 EAL: Detected CPU lcores: 48 00:05:00.924 EAL: Detected NUMA nodes: 2 00:05:00.924 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:00.924 EAL: Detected shared linkage of DPDK 00:05:00.924 EAL: open shared lib 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:00.924 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:00.924 EAL: Registered [vdev] bus. 00:05:00.924 EAL: bus.vdev log level changed from disabled to notice 00:05:00.924 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:00.924 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:00.924 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:00.924 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:00.924 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:00.924 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:00.924 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:00.924 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:00.924 EAL: No shared files mode enabled, IPC will be disabled 00:05:00.924 EAL: No shared files mode enabled, IPC is disabled 00:05:00.924 EAL: Bus pci wants IOVA as 'DC' 00:05:00.924 EAL: Bus vdev wants IOVA as 'DC' 00:05:00.924 EAL: Buses did not request a specific IOVA mode. 00:05:00.924 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:00.924 EAL: Selected IOVA mode 'VA' 00:05:00.924 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.924 EAL: Probing VFIO support... 00:05:00.924 EAL: IOMMU type 1 (Type 1) is supported 00:05:00.924 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:00.924 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:00.924 EAL: VFIO support initialized 00:05:00.924 EAL: Ask a virtual area of 0x2e000 bytes 00:05:00.924 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:00.924 EAL: Setting up physically contiguous memory... 
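The "Selected IOVA mode 'VA'" decision above only happens when the kernel exposes a working IOMMU and VFIO type 1 support. Two standard host-side checks (plain sysfs and lsmod, not part of the test itself) that confirm those preconditions:

```bash
# Non-empty output means the kernel exposes IOMMU groups usable by VFIO type 1
ls /sys/kernel/iommu_groups | wc -l
# The vfio modules should be loaded before the EAL probes VFIO support
lsmod | grep '^vfio'
```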
00:05:00.924 EAL: Setting maximum number of open files to 524288 00:05:00.924 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:00.924 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:00.924 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:00.924 EAL: Ask a virtual area of 0x61000 bytes 00:05:00.924 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:00.924 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:00.924 EAL: Ask a virtual area of 0x400000000 bytes 00:05:00.924 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:00.924 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:00.924 EAL: Ask a virtual area of 0x61000 bytes 00:05:00.924 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:00.924 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:00.924 EAL: Ask a virtual area of 0x400000000 bytes 00:05:00.924 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:00.924 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:00.924 EAL: Ask a virtual area of 0x61000 bytes 00:05:00.924 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:00.924 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:00.924 EAL: Ask a virtual area of 0x400000000 bytes 00:05:00.924 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:00.924 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:00.924 EAL: Ask a virtual area of 0x61000 bytes 00:05:00.924 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:00.924 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:00.924 EAL: Ask a virtual area of 0x400000000 bytes 00:05:00.924 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:00.924 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:00.924 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:00.924 EAL: Ask a virtual area of 0x61000 bytes 00:05:00.924 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:00.925 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:00.925 EAL: Ask a virtual area of 0x400000000 bytes 00:05:00.925 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:00.925 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:00.925 EAL: Ask a virtual area of 0x61000 bytes 00:05:00.925 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:00.925 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:00.925 EAL: Ask a virtual area of 0x400000000 bytes 00:05:00.925 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:00.925 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:00.925 EAL: Ask a virtual area of 0x61000 bytes 00:05:00.925 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:00.925 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:00.925 EAL: Ask a virtual area of 0x400000000 bytes 00:05:00.925 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:00.925 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:00.925 EAL: Ask a virtual area of 0x61000 bytes 00:05:00.925 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:00.925 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:00.925 EAL: Ask a virtual area of 0x400000000 bytes 00:05:00.925 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:00.925 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:00.925 EAL: Hugepages will be freed exactly as allocated. 00:05:00.925 EAL: No shared files mode enabled, IPC is disabled 00:05:00.925 EAL: No shared files mode enabled, IPC is disabled 00:05:00.925 EAL: TSC frequency is ~2700000 KHz 00:05:00.925 EAL: Main lcore 0 is ready (tid=7f1735099a00;cpuset=[0]) 00:05:00.925 EAL: Trying to obtain current memory policy. 00:05:00.925 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.925 EAL: Restoring previous memory policy: 0 00:05:00.925 EAL: request: mp_malloc_sync 00:05:00.925 EAL: No shared files mode enabled, IPC is disabled 00:05:00.925 EAL: Heap on socket 0 was expanded by 2MB 00:05:00.925 EAL: No shared files mode enabled, IPC is disabled 00:05:00.925 EAL: No shared files mode enabled, IPC is disabled 00:05:00.925 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:00.925 EAL: Mem event callback 'spdk:(nil)' registered 00:05:00.925 00:05:00.925 00:05:00.925 CUnit - A unit testing framework for C - Version 2.1-3 00:05:00.925 http://cunit.sourceforge.net/ 00:05:00.925 00:05:00.925 00:05:00.925 Suite: components_suite 00:05:00.925 Test: vtophys_malloc_test ...passed 00:05:00.925 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:00.925 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.925 EAL: Restoring previous memory policy: 4 00:05:00.925 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.925 EAL: request: mp_malloc_sync 00:05:00.925 EAL: No shared files mode enabled, IPC is disabled 00:05:00.925 EAL: Heap on socket 0 was expanded by 4MB 00:05:00.925 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.925 EAL: request: mp_malloc_sync 00:05:00.925 EAL: No shared files mode enabled, IPC is disabled 00:05:00.925 EAL: Heap on socket 0 was shrunk by 4MB 00:05:00.925 EAL: Trying to obtain current memory policy. 00:05:00.925 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.925 EAL: Restoring previous memory policy: 4 00:05:00.925 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.925 EAL: request: mp_malloc_sync 00:05:00.925 EAL: No shared files mode enabled, IPC is disabled 00:05:00.925 EAL: Heap on socket 0 was expanded by 6MB 00:05:00.925 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.925 EAL: request: mp_malloc_sync 00:05:00.925 EAL: No shared files mode enabled, IPC is disabled 00:05:00.925 EAL: Heap on socket 0 was shrunk by 6MB 00:05:00.925 EAL: Trying to obtain current memory policy. 00:05:00.925 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.925 EAL: Restoring previous memory policy: 4 00:05:00.925 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.925 EAL: request: mp_malloc_sync 00:05:00.925 EAL: No shared files mode enabled, IPC is disabled 00:05:00.925 EAL: Heap on socket 0 was expanded by 10MB 00:05:00.925 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.925 EAL: request: mp_malloc_sync 00:05:00.925 EAL: No shared files mode enabled, IPC is disabled 00:05:00.925 EAL: Heap on socket 0 was shrunk by 10MB 00:05:00.925 EAL: Trying to obtain current memory policy. 
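Each expand/shrink pair in this trace corresponds to one allocation in vtophys_spdk_malloc_test, backed by the 2 MB hugepages set up earlier. Hugepage consumption can be watched from the host while the test runs; a sketch using standard procfs/sysfs paths:

```bash
# Global 2 MB hugepage accounting while the heap grows and shrinks
grep -E 'HugePages_(Total|Free)' /proc/meminfo
# Per-NUMA-node view; two nodes were detected on this host
cat /sys/devices/system/node/node[01]/hugepages/hugepages-2048kB/free_hugepages
```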
00:05:00.925 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.925 EAL: Restoring previous memory policy: 4 00:05:00.925 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.925 EAL: request: mp_malloc_sync 00:05:00.925 EAL: No shared files mode enabled, IPC is disabled 00:05:00.925 EAL: Heap on socket 0 was expanded by 18MB 00:05:00.925 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.925 EAL: request: mp_malloc_sync 00:05:00.925 EAL: No shared files mode enabled, IPC is disabled 00:05:00.925 EAL: Heap on socket 0 was shrunk by 18MB 00:05:00.925 EAL: Trying to obtain current memory policy. 00:05:00.925 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.925 EAL: Restoring previous memory policy: 4 00:05:00.925 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.925 EAL: request: mp_malloc_sync 00:05:00.925 EAL: No shared files mode enabled, IPC is disabled 00:05:00.925 EAL: Heap on socket 0 was expanded by 34MB 00:05:00.925 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.925 EAL: request: mp_malloc_sync 00:05:00.925 EAL: No shared files mode enabled, IPC is disabled 00:05:00.925 EAL: Heap on socket 0 was shrunk by 34MB 00:05:00.925 EAL: Trying to obtain current memory policy. 00:05:00.925 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.925 EAL: Restoring previous memory policy: 4 00:05:00.925 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.925 EAL: request: mp_malloc_sync 00:05:00.925 EAL: No shared files mode enabled, IPC is disabled 00:05:00.925 EAL: Heap on socket 0 was expanded by 66MB 00:05:00.925 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.925 EAL: request: mp_malloc_sync 00:05:00.925 EAL: No shared files mode enabled, IPC is disabled 00:05:00.925 EAL: Heap on socket 0 was shrunk by 66MB 00:05:00.925 EAL: Trying to obtain current memory policy. 00:05:00.925 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.925 EAL: Restoring previous memory policy: 4 00:05:00.925 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.925 EAL: request: mp_malloc_sync 00:05:00.925 EAL: No shared files mode enabled, IPC is disabled 00:05:00.925 EAL: Heap on socket 0 was expanded by 130MB 00:05:00.925 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.925 EAL: request: mp_malloc_sync 00:05:00.925 EAL: No shared files mode enabled, IPC is disabled 00:05:00.925 EAL: Heap on socket 0 was shrunk by 130MB 00:05:00.925 EAL: Trying to obtain current memory policy. 00:05:00.925 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:01.183 EAL: Restoring previous memory policy: 4 00:05:01.183 EAL: Calling mem event callback 'spdk:(nil)' 00:05:01.183 EAL: request: mp_malloc_sync 00:05:01.183 EAL: No shared files mode enabled, IPC is disabled 00:05:01.183 EAL: Heap on socket 0 was expanded by 258MB 00:05:01.183 EAL: Calling mem event callback 'spdk:(nil)' 00:05:01.183 EAL: request: mp_malloc_sync 00:05:01.183 EAL: No shared files mode enabled, IPC is disabled 00:05:01.183 EAL: Heap on socket 0 was shrunk by 258MB 00:05:01.183 EAL: Trying to obtain current memory policy. 
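The repeated MPOL_PREFERRED lines show each allocation being pinned to socket 0 before the heap is grown. The node topology the EAL detected can be cross-checked with numactl, assuming the package is installed on the host:

```bash
numactl --hardware    # should list node 0 and node 1, matching the lcore map above
numactl --show        # the calling shell's own policy, for contrast with MPOL_PREFERRED
```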
00:05:01.183 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:01.440 EAL: Restoring previous memory policy: 4 00:05:01.440 EAL: Calling mem event callback 'spdk:(nil)' 00:05:01.440 EAL: request: mp_malloc_sync 00:05:01.440 EAL: No shared files mode enabled, IPC is disabled 00:05:01.440 EAL: Heap on socket 0 was expanded by 514MB 00:05:01.440 EAL: Calling mem event callback 'spdk:(nil)' 00:05:01.440 EAL: request: mp_malloc_sync 00:05:01.440 EAL: No shared files mode enabled, IPC is disabled 00:05:01.440 EAL: Heap on socket 0 was shrunk by 514MB 00:05:01.440 EAL: Trying to obtain current memory policy. 00:05:01.440 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:02.005 EAL: Restoring previous memory policy: 4 00:05:02.005 EAL: Calling mem event callback 'spdk:(nil)' 00:05:02.005 EAL: request: mp_malloc_sync 00:05:02.005 EAL: No shared files mode enabled, IPC is disabled 00:05:02.005 EAL: Heap on socket 0 was expanded by 1026MB 00:05:02.005 EAL: Calling mem event callback 'spdk:(nil)' 00:05:02.281 EAL: request: mp_malloc_sync 00:05:02.281 EAL: No shared files mode enabled, IPC is disabled 00:05:02.281 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:02.281 passed 00:05:02.281 00:05:02.281 Run Summary: Type Total Ran Passed Failed Inactive 00:05:02.281 suites 1 1 n/a 0 0 00:05:02.281 tests 2 2 2 0 0 00:05:02.281 asserts 497 497 497 0 n/a 00:05:02.281 00:05:02.281 Elapsed time = 1.348 seconds 00:05:02.281 EAL: Calling mem event callback 'spdk:(nil)' 00:05:02.281 EAL: request: mp_malloc_sync 00:05:02.281 EAL: No shared files mode enabled, IPC is disabled 00:05:02.281 EAL: Heap on socket 0 was shrunk by 2MB 00:05:02.281 EAL: No shared files mode enabled, IPC is disabled 00:05:02.281 EAL: No shared files mode enabled, IPC is disabled 00:05:02.281 EAL: No shared files mode enabled, IPC is disabled 00:05:02.281 00:05:02.281 real 0m1.475s 00:05:02.281 user 0m0.842s 00:05:02.281 sys 0m0.597s 00:05:02.281 14:03:29 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:02.281 14:03:29 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:02.281 ************************************ 00:05:02.281 END TEST env_vtophys 00:05:02.281 ************************************ 00:05:02.281 14:03:29 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:02.281 14:03:29 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:02.281 14:03:29 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:02.281 14:03:29 env -- common/autotest_common.sh@10 -- # set +x 00:05:02.281 ************************************ 00:05:02.281 START TEST env_pci 00:05:02.281 ************************************ 00:05:02.281 14:03:29 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:02.281 00:05:02.281 00:05:02.281 CUnit - A unit testing framework for C - Version 2.1-3 00:05:02.281 http://cunit.sourceforge.net/ 00:05:02.281 00:05:02.281 00:05:02.281 Suite: pci 00:05:02.281 Test: pci_hook ...[2024-07-24 14:03:29.577364] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 4159736 has claimed it 00:05:02.281 EAL: Cannot find device (10000:00:01.0) 00:05:02.281 EAL: Failed to attach device on primary process 00:05:02.281 passed 00:05:02.281 00:05:02.281 Run Summary: Type Total Ran Passed Failed Inactive 00:05:02.281 suites 1 
1 n/a 0 0 00:05:02.281 tests 1 1 1 0 0 00:05:02.281 asserts 25 25 25 0 n/a 00:05:02.281 00:05:02.281 Elapsed time = 0.026 seconds 00:05:02.281 00:05:02.281 real 0m0.039s 00:05:02.281 user 0m0.013s 00:05:02.281 sys 0m0.025s 00:05:02.281 14:03:29 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:02.281 14:03:29 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:02.281 ************************************ 00:05:02.281 END TEST env_pci 00:05:02.281 ************************************ 00:05:02.281 14:03:29 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:02.281 14:03:29 env -- env/env.sh@15 -- # uname 00:05:02.281 14:03:29 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:02.281 14:03:29 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:02.281 14:03:29 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:02.281 14:03:29 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:02.281 14:03:29 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:02.281 14:03:29 env -- common/autotest_common.sh@10 -- # set +x 00:05:02.539 ************************************ 00:05:02.539 START TEST env_dpdk_post_init 00:05:02.539 ************************************ 00:05:02.539 14:03:29 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:02.539 EAL: Detected CPU lcores: 48 00:05:02.539 EAL: Detected NUMA nodes: 2 00:05:02.539 EAL: Detected shared linkage of DPDK 00:05:02.539 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:02.539 EAL: Selected IOVA mode 'VA' 00:05:02.539 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.539 EAL: VFIO support initialized 00:05:02.539 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:02.539 EAL: Using IOMMU type 1 (Type 1) 00:05:02.539 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:02.539 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:02.539 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:02.539 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:02.539 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:02.539 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:02.539 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:02.539 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:02.539 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:02.539 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:02.539 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:02.798 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:02.798 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:02.798 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:02.798 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:02.798 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:03.364 EAL: Probe PCI driver: spdk_nvme (8086:0a54) 
device: 0000:84:00.0 (socket 1) 00:05:06.642 EAL: Releasing PCI mapped resource for 0000:84:00.0 00:05:06.642 EAL: Calling pci_unmap_resource for 0000:84:00.0 at 0x202001040000 00:05:06.900 Starting DPDK initialization... 00:05:06.900 Starting SPDK post initialization... 00:05:06.900 SPDK NVMe probe 00:05:06.900 Attaching to 0000:84:00.0 00:05:06.900 Attached to 0000:84:00.0 00:05:06.901 Cleaning up... 00:05:06.901 00:05:06.901 real 0m4.404s 00:05:06.901 user 0m3.265s 00:05:06.901 sys 0m0.200s 00:05:06.901 14:03:34 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:06.901 14:03:34 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:06.901 ************************************ 00:05:06.901 END TEST env_dpdk_post_init 00:05:06.901 ************************************ 00:05:06.901 14:03:34 env -- env/env.sh@26 -- # uname 00:05:06.901 14:03:34 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:06.901 14:03:34 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:06.901 14:03:34 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:06.901 14:03:34 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:06.901 14:03:34 env -- common/autotest_common.sh@10 -- # set +x 00:05:06.901 ************************************ 00:05:06.901 START TEST env_mem_callbacks 00:05:06.901 ************************************ 00:05:06.901 14:03:34 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:06.901 EAL: Detected CPU lcores: 48 00:05:06.901 EAL: Detected NUMA nodes: 2 00:05:06.901 EAL: Detected shared linkage of DPDK 00:05:06.901 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:06.901 EAL: Selected IOVA mode 'VA' 00:05:06.901 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.901 EAL: VFIO support initialized 00:05:06.901 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:06.901 00:05:06.901 00:05:06.901 CUnit - A unit testing framework for C - Version 2.1-3 00:05:06.901 http://cunit.sourceforge.net/ 00:05:06.901 00:05:06.901 00:05:06.901 Suite: memory 00:05:06.901 Test: test ... 
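The register/unregister trace that follows is the mem_callbacks CUnit suite exercising SPDK's memory-event hooks as the EAL heap grows and shrinks. Since this run is in no-shared-files mode (logged above), the backing hugepages are anonymous mappings and are visible from the host; a sketch, assuming the binary is still running when queried:

```bash
# Anonymous 2 MB hugepage mappings appear as /anon_hugepage in no-shared-files mode
grep -i anon_hugepage "/proc/$(pgrep -f mem_callbacks)/maps"
```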
00:05:06.901 register 0x200000200000 2097152 00:05:06.901 malloc 3145728 00:05:06.901 register 0x200000400000 4194304 00:05:06.901 buf 0x200000500000 len 3145728 PASSED 00:05:06.901 malloc 64 00:05:06.901 buf 0x2000004fff40 len 64 PASSED 00:05:06.901 malloc 4194304 00:05:06.901 register 0x200000800000 6291456 00:05:06.901 buf 0x200000a00000 len 4194304 PASSED 00:05:06.901 free 0x200000500000 3145728 00:05:06.901 free 0x2000004fff40 64 00:05:06.901 unregister 0x200000400000 4194304 PASSED 00:05:06.901 free 0x200000a00000 4194304 00:05:06.901 unregister 0x200000800000 6291456 PASSED 00:05:06.901 malloc 8388608 00:05:06.901 register 0x200000400000 10485760 00:05:06.901 buf 0x200000600000 len 8388608 PASSED 00:05:06.901 free 0x200000600000 8388608 00:05:06.901 unregister 0x200000400000 10485760 PASSED 00:05:06.901 passed 00:05:06.901 00:05:06.901 Run Summary: Type Total Ran Passed Failed Inactive 00:05:06.901 suites 1 1 n/a 0 0 00:05:06.901 tests 1 1 1 0 0 00:05:06.901 asserts 15 15 15 0 n/a 00:05:06.901 00:05:06.901 Elapsed time = 0.005 seconds 00:05:06.901 00:05:06.901 real 0m0.054s 00:05:06.901 user 0m0.016s 00:05:06.901 sys 0m0.038s 00:05:06.901 14:03:34 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:06.901 14:03:34 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:06.901 ************************************ 00:05:06.901 END TEST env_mem_callbacks 00:05:06.901 ************************************ 00:05:06.901 00:05:06.901 real 0m6.410s 00:05:06.901 user 0m4.367s 00:05:06.901 sys 0m1.084s 00:05:06.901 14:03:34 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:06.901 14:03:34 env -- common/autotest_common.sh@10 -- # set +x 00:05:06.901 ************************************ 00:05:06.901 END TEST env 00:05:06.901 ************************************ 00:05:06.901 14:03:34 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:06.901 14:03:34 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:06.901 14:03:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:06.901 14:03:34 -- common/autotest_common.sh@10 -- # set +x 00:05:06.901 ************************************ 00:05:06.901 START TEST rpc 00:05:06.901 ************************************ 00:05:06.901 14:03:34 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:07.160 * Looking for test storage... 00:05:07.160 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:07.160 14:03:34 rpc -- rpc/rpc.sh@65 -- # spdk_pid=4160394 00:05:07.160 14:03:34 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:07.160 14:03:34 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:07.160 14:03:34 rpc -- rpc/rpc.sh@67 -- # waitforlisten 4160394 00:05:07.160 14:03:34 rpc -- common/autotest_common.sh@827 -- # '[' -z 4160394 ']' 00:05:07.160 14:03:34 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.160 14:03:34 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:07.160 14:03:34 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
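The rpc suite starting here launches spdk_tgt with the bdev tracepoint group enabled and then blocks until the RPC socket answers. A hand-rolled equivalent of that launch-and-wait step, assuming the same build tree and the default /var/tmp/spdk.sock socket:

```bash
./build/bin/spdk_tgt -e bdev &
tgt_pid=$!
# Poll the RPC socket until the target is ready (what waitforlisten does here)
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done
echo "spdk_tgt ($tgt_pid) is up"
```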
00:05:07.160 14:03:34 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:07.160 14:03:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.160 [2024-07-24 14:03:34.336623] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:07.160 [2024-07-24 14:03:34.336719] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4160394 ] 00:05:07.160 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.160 [2024-07-24 14:03:34.405756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.160 [2024-07-24 14:03:34.489936] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:07.160 [2024-07-24 14:03:34.489994] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 4160394' to capture a snapshot of events at runtime. 00:05:07.160 [2024-07-24 14:03:34.490023] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:07.160 [2024-07-24 14:03:34.490035] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:07.160 [2024-07-24 14:03:34.490045] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid4160394 for offline analysis/debug. 00:05:07.160 [2024-07-24 14:03:34.490087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.417 14:03:34 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:07.417 14:03:34 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:07.417 14:03:34 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:07.417 14:03:34 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:07.417 14:03:34 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:07.417 14:03:34 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:07.417 14:03:34 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:07.417 14:03:34 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:07.417 14:03:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.417 ************************************ 00:05:07.417 START TEST rpc_integrity 00:05:07.417 ************************************ 00:05:07.417 14:03:34 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:07.417 14:03:34 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:07.417 14:03:34 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.417 14:03:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.418 14:03:34 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.418 14:03:34 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:07.418 14:03:34 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:07.675 14:03:34 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 
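rpc_integrity, which starts above, is a create/inspect/delete cycle over the bdev RPCs. The same cycle can be issued by hand with scripts/rpc.py, using the method names visible in this trace; the sizes match the Malloc0 dump that follows (8 MiB at 512-byte blocks gives the 16384 num_blocks shown):

```bash
./scripts/rpc.py bdev_malloc_create 8 512              # creates Malloc0
./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
./scripts/rpc.py bdev_get_bdevs | jq length            # expect 2
./scripts/rpc.py bdev_passthru_delete Passthru0
./scripts/rpc.py bdev_malloc_delete Malloc0
```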
00:05:07.675 14:03:34 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:07.675 14:03:34 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.675 14:03:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.675 14:03:34 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.675 14:03:34 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:07.675 14:03:34 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:07.675 14:03:34 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.675 14:03:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.675 14:03:34 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.675 14:03:34 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:07.675 { 00:05:07.675 "name": "Malloc0", 00:05:07.675 "aliases": [ 00:05:07.675 "f333eae7-8cc5-4599-ab01-9cb9367ba512" 00:05:07.675 ], 00:05:07.675 "product_name": "Malloc disk", 00:05:07.675 "block_size": 512, 00:05:07.675 "num_blocks": 16384, 00:05:07.675 "uuid": "f333eae7-8cc5-4599-ab01-9cb9367ba512", 00:05:07.675 "assigned_rate_limits": { 00:05:07.675 "rw_ios_per_sec": 0, 00:05:07.675 "rw_mbytes_per_sec": 0, 00:05:07.675 "r_mbytes_per_sec": 0, 00:05:07.675 "w_mbytes_per_sec": 0 00:05:07.675 }, 00:05:07.675 "claimed": false, 00:05:07.675 "zoned": false, 00:05:07.675 "supported_io_types": { 00:05:07.675 "read": true, 00:05:07.675 "write": true, 00:05:07.675 "unmap": true, 00:05:07.675 "write_zeroes": true, 00:05:07.675 "flush": true, 00:05:07.675 "reset": true, 00:05:07.675 "compare": false, 00:05:07.675 "compare_and_write": false, 00:05:07.675 "abort": true, 00:05:07.675 "nvme_admin": false, 00:05:07.675 "nvme_io": false 00:05:07.675 }, 00:05:07.675 "memory_domains": [ 00:05:07.675 { 00:05:07.675 "dma_device_id": "system", 00:05:07.675 "dma_device_type": 1 00:05:07.675 }, 00:05:07.675 { 00:05:07.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:07.675 "dma_device_type": 2 00:05:07.676 } 00:05:07.676 ], 00:05:07.676 "driver_specific": {} 00:05:07.676 } 00:05:07.676 ]' 00:05:07.676 14:03:34 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:07.676 14:03:34 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:07.676 14:03:34 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:07.676 14:03:34 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.676 14:03:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.676 [2024-07-24 14:03:34.865195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:07.676 [2024-07-24 14:03:34.865238] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:07.676 [2024-07-24 14:03:34.865262] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23be7a0 00:05:07.676 [2024-07-24 14:03:34.865277] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:07.676 [2024-07-24 14:03:34.866776] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:07.676 [2024-07-24 14:03:34.866813] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:07.676 Passthru0 00:05:07.676 14:03:34 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.676 14:03:34 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:07.676 14:03:34 
rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.676 14:03:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.676 14:03:34 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.676 14:03:34 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:07.676 { 00:05:07.676 "name": "Malloc0", 00:05:07.676 "aliases": [ 00:05:07.676 "f333eae7-8cc5-4599-ab01-9cb9367ba512" 00:05:07.676 ], 00:05:07.676 "product_name": "Malloc disk", 00:05:07.676 "block_size": 512, 00:05:07.676 "num_blocks": 16384, 00:05:07.676 "uuid": "f333eae7-8cc5-4599-ab01-9cb9367ba512", 00:05:07.676 "assigned_rate_limits": { 00:05:07.676 "rw_ios_per_sec": 0, 00:05:07.676 "rw_mbytes_per_sec": 0, 00:05:07.676 "r_mbytes_per_sec": 0, 00:05:07.676 "w_mbytes_per_sec": 0 00:05:07.676 }, 00:05:07.676 "claimed": true, 00:05:07.676 "claim_type": "exclusive_write", 00:05:07.676 "zoned": false, 00:05:07.676 "supported_io_types": { 00:05:07.676 "read": true, 00:05:07.676 "write": true, 00:05:07.676 "unmap": true, 00:05:07.676 "write_zeroes": true, 00:05:07.676 "flush": true, 00:05:07.676 "reset": true, 00:05:07.676 "compare": false, 00:05:07.676 "compare_and_write": false, 00:05:07.676 "abort": true, 00:05:07.676 "nvme_admin": false, 00:05:07.676 "nvme_io": false 00:05:07.676 }, 00:05:07.676 "memory_domains": [ 00:05:07.676 { 00:05:07.676 "dma_device_id": "system", 00:05:07.676 "dma_device_type": 1 00:05:07.676 }, 00:05:07.676 { 00:05:07.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:07.676 "dma_device_type": 2 00:05:07.676 } 00:05:07.676 ], 00:05:07.676 "driver_specific": {} 00:05:07.676 }, 00:05:07.676 { 00:05:07.676 "name": "Passthru0", 00:05:07.676 "aliases": [ 00:05:07.676 "66e26c7e-4438-5161-b5bb-310751f9159a" 00:05:07.676 ], 00:05:07.676 "product_name": "passthru", 00:05:07.676 "block_size": 512, 00:05:07.676 "num_blocks": 16384, 00:05:07.676 "uuid": "66e26c7e-4438-5161-b5bb-310751f9159a", 00:05:07.676 "assigned_rate_limits": { 00:05:07.676 "rw_ios_per_sec": 0, 00:05:07.676 "rw_mbytes_per_sec": 0, 00:05:07.676 "r_mbytes_per_sec": 0, 00:05:07.676 "w_mbytes_per_sec": 0 00:05:07.676 }, 00:05:07.676 "claimed": false, 00:05:07.676 "zoned": false, 00:05:07.676 "supported_io_types": { 00:05:07.676 "read": true, 00:05:07.676 "write": true, 00:05:07.676 "unmap": true, 00:05:07.676 "write_zeroes": true, 00:05:07.676 "flush": true, 00:05:07.676 "reset": true, 00:05:07.676 "compare": false, 00:05:07.676 "compare_and_write": false, 00:05:07.676 "abort": true, 00:05:07.676 "nvme_admin": false, 00:05:07.676 "nvme_io": false 00:05:07.676 }, 00:05:07.676 "memory_domains": [ 00:05:07.676 { 00:05:07.676 "dma_device_id": "system", 00:05:07.676 "dma_device_type": 1 00:05:07.676 }, 00:05:07.676 { 00:05:07.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:07.676 "dma_device_type": 2 00:05:07.676 } 00:05:07.676 ], 00:05:07.676 "driver_specific": { 00:05:07.676 "passthru": { 00:05:07.676 "name": "Passthru0", 00:05:07.676 "base_bdev_name": "Malloc0" 00:05:07.676 } 00:05:07.676 } 00:05:07.676 } 00:05:07.676 ]' 00:05:07.676 14:03:34 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:07.676 14:03:34 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:07.676 14:03:34 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:07.676 14:03:34 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.676 14:03:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.676 14:03:34 rpc.rpc_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.676 14:03:34 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:07.676 14:03:34 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.676 14:03:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.676 14:03:34 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.676 14:03:34 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:07.676 14:03:34 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.676 14:03:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.676 14:03:34 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.676 14:03:34 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:07.676 14:03:34 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:07.676 14:03:34 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:07.676 00:05:07.676 real 0m0.224s 00:05:07.676 user 0m0.147s 00:05:07.676 sys 0m0.023s 00:05:07.676 14:03:34 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:07.676 14:03:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.676 ************************************ 00:05:07.676 END TEST rpc_integrity 00:05:07.676 ************************************ 00:05:07.676 14:03:35 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:07.677 14:03:35 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:07.677 14:03:35 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:07.677 14:03:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.677 ************************************ 00:05:07.677 START TEST rpc_plugins 00:05:07.677 ************************************ 00:05:07.677 14:03:35 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:07.677 14:03:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:07.677 14:03:35 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.677 14:03:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:07.677 14:03:35 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.677 14:03:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:07.677 14:03:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:07.677 14:03:35 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.677 14:03:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:07.934 14:03:35 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.934 14:03:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:07.934 { 00:05:07.934 "name": "Malloc1", 00:05:07.934 "aliases": [ 00:05:07.934 "ea0d8015-dbf0-43dc-ab77-5a92a1e520e7" 00:05:07.934 ], 00:05:07.934 "product_name": "Malloc disk", 00:05:07.934 "block_size": 4096, 00:05:07.934 "num_blocks": 256, 00:05:07.934 "uuid": "ea0d8015-dbf0-43dc-ab77-5a92a1e520e7", 00:05:07.934 "assigned_rate_limits": { 00:05:07.934 "rw_ios_per_sec": 0, 00:05:07.934 "rw_mbytes_per_sec": 0, 00:05:07.934 "r_mbytes_per_sec": 0, 00:05:07.934 "w_mbytes_per_sec": 0 00:05:07.934 }, 00:05:07.934 "claimed": false, 00:05:07.934 "zoned": false, 00:05:07.934 "supported_io_types": { 00:05:07.934 "read": true, 00:05:07.934 "write": true, 00:05:07.934 "unmap": true, 00:05:07.934 "write_zeroes": true, 00:05:07.934 "flush": true, 00:05:07.934 
"reset": true, 00:05:07.934 "compare": false, 00:05:07.934 "compare_and_write": false, 00:05:07.934 "abort": true, 00:05:07.934 "nvme_admin": false, 00:05:07.934 "nvme_io": false 00:05:07.935 }, 00:05:07.935 "memory_domains": [ 00:05:07.935 { 00:05:07.935 "dma_device_id": "system", 00:05:07.935 "dma_device_type": 1 00:05:07.935 }, 00:05:07.935 { 00:05:07.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:07.935 "dma_device_type": 2 00:05:07.935 } 00:05:07.935 ], 00:05:07.935 "driver_specific": {} 00:05:07.935 } 00:05:07.935 ]' 00:05:07.935 14:03:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:07.935 14:03:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:07.935 14:03:35 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:07.935 14:03:35 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.935 14:03:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:07.935 14:03:35 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.935 14:03:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:07.935 14:03:35 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.935 14:03:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:07.935 14:03:35 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.935 14:03:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:07.935 14:03:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:07.935 14:03:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:07.935 00:05:07.935 real 0m0.113s 00:05:07.935 user 0m0.073s 00:05:07.935 sys 0m0.010s 00:05:07.935 14:03:35 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:07.935 14:03:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:07.935 ************************************ 00:05:07.935 END TEST rpc_plugins 00:05:07.935 ************************************ 00:05:07.935 14:03:35 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:07.935 14:03:35 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:07.935 14:03:35 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:07.935 14:03:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.935 ************************************ 00:05:07.935 START TEST rpc_trace_cmd_test 00:05:07.935 ************************************ 00:05:07.935 14:03:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:07.935 14:03:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:07.935 14:03:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:07.935 14:03:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.935 14:03:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:07.935 14:03:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.935 14:03:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:07.935 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid4160394", 00:05:07.935 "tpoint_group_mask": "0x8", 00:05:07.935 "iscsi_conn": { 00:05:07.935 "mask": "0x2", 00:05:07.935 "tpoint_mask": "0x0" 00:05:07.935 }, 00:05:07.935 "scsi": { 00:05:07.935 "mask": "0x4", 00:05:07.935 "tpoint_mask": "0x0" 00:05:07.935 }, 00:05:07.935 "bdev": { 00:05:07.935 "mask": "0x8", 00:05:07.935 "tpoint_mask": "0xffffffffffffffff" 00:05:07.935 }, 
00:05:07.935 "nvmf_rdma": { 00:05:07.935 "mask": "0x10", 00:05:07.935 "tpoint_mask": "0x0" 00:05:07.935 }, 00:05:07.935 "nvmf_tcp": { 00:05:07.935 "mask": "0x20", 00:05:07.935 "tpoint_mask": "0x0" 00:05:07.935 }, 00:05:07.935 "ftl": { 00:05:07.935 "mask": "0x40", 00:05:07.935 "tpoint_mask": "0x0" 00:05:07.935 }, 00:05:07.935 "blobfs": { 00:05:07.935 "mask": "0x80", 00:05:07.935 "tpoint_mask": "0x0" 00:05:07.935 }, 00:05:07.935 "dsa": { 00:05:07.935 "mask": "0x200", 00:05:07.935 "tpoint_mask": "0x0" 00:05:07.935 }, 00:05:07.935 "thread": { 00:05:07.935 "mask": "0x400", 00:05:07.935 "tpoint_mask": "0x0" 00:05:07.935 }, 00:05:07.935 "nvme_pcie": { 00:05:07.935 "mask": "0x800", 00:05:07.935 "tpoint_mask": "0x0" 00:05:07.935 }, 00:05:07.935 "iaa": { 00:05:07.935 "mask": "0x1000", 00:05:07.935 "tpoint_mask": "0x0" 00:05:07.935 }, 00:05:07.935 "nvme_tcp": { 00:05:07.935 "mask": "0x2000", 00:05:07.935 "tpoint_mask": "0x0" 00:05:07.935 }, 00:05:07.935 "bdev_nvme": { 00:05:07.935 "mask": "0x4000", 00:05:07.935 "tpoint_mask": "0x0" 00:05:07.935 }, 00:05:07.935 "sock": { 00:05:07.935 "mask": "0x8000", 00:05:07.935 "tpoint_mask": "0x0" 00:05:07.935 } 00:05:07.935 }' 00:05:07.935 14:03:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:07.935 14:03:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:07.935 14:03:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:07.935 14:03:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:07.935 14:03:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:08.193 14:03:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:08.193 14:03:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:08.193 14:03:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:08.193 14:03:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:08.193 14:03:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:08.193 00:05:08.193 real 0m0.198s 00:05:08.193 user 0m0.180s 00:05:08.193 sys 0m0.012s 00:05:08.193 14:03:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:08.193 14:03:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:08.193 ************************************ 00:05:08.193 END TEST rpc_trace_cmd_test 00:05:08.193 ************************************ 00:05:08.193 14:03:35 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:08.193 14:03:35 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:08.193 14:03:35 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:08.193 14:03:35 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:08.193 14:03:35 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:08.193 14:03:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.193 ************************************ 00:05:08.193 START TEST rpc_daemon_integrity 00:05:08.193 ************************************ 00:05:08.193 14:03:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:08.193 14:03:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:08.193 14:03:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.193 14:03:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.193 14:03:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:05:08.193 14:03:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:08.193 14:03:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:08.193 14:03:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:08.193 14:03:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:08.193 14:03:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.193 14:03:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.193 14:03:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.193 14:03:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:08.193 14:03:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:08.193 14:03:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.193 14:03:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.193 14:03:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.193 14:03:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:08.193 { 00:05:08.193 "name": "Malloc2", 00:05:08.193 "aliases": [ 00:05:08.193 "90bda1e7-9d7a-4e91-a551-5493637d9236" 00:05:08.193 ], 00:05:08.193 "product_name": "Malloc disk", 00:05:08.193 "block_size": 512, 00:05:08.193 "num_blocks": 16384, 00:05:08.193 "uuid": "90bda1e7-9d7a-4e91-a551-5493637d9236", 00:05:08.193 "assigned_rate_limits": { 00:05:08.193 "rw_ios_per_sec": 0, 00:05:08.193 "rw_mbytes_per_sec": 0, 00:05:08.193 "r_mbytes_per_sec": 0, 00:05:08.193 "w_mbytes_per_sec": 0 00:05:08.193 }, 00:05:08.193 "claimed": false, 00:05:08.193 "zoned": false, 00:05:08.193 "supported_io_types": { 00:05:08.193 "read": true, 00:05:08.193 "write": true, 00:05:08.193 "unmap": true, 00:05:08.193 "write_zeroes": true, 00:05:08.193 "flush": true, 00:05:08.193 "reset": true, 00:05:08.193 "compare": false, 00:05:08.193 "compare_and_write": false, 00:05:08.193 "abort": true, 00:05:08.193 "nvme_admin": false, 00:05:08.193 "nvme_io": false 00:05:08.193 }, 00:05:08.193 "memory_domains": [ 00:05:08.193 { 00:05:08.193 "dma_device_id": "system", 00:05:08.193 "dma_device_type": 1 00:05:08.193 }, 00:05:08.193 { 00:05:08.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:08.193 "dma_device_type": 2 00:05:08.193 } 00:05:08.193 ], 00:05:08.193 "driver_specific": {} 00:05:08.193 } 00:05:08.193 ]' 00:05:08.193 14:03:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:08.193 14:03:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:08.193 14:03:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:08.193 14:03:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.193 14:03:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.193 [2024-07-24 14:03:35.539192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:08.193 [2024-07-24 14:03:35.539236] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:08.193 [2024-07-24 14:03:35.539262] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x220d9d0 00:05:08.193 [2024-07-24 14:03:35.539285] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:08.193 [2024-07-24 14:03:35.540615] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:05:08.193 [2024-07-24 14:03:35.540643] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:08.193 Passthru0 00:05:08.193 14:03:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.193 14:03:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:08.193 14:03:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.193 14:03:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.193 14:03:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.193 14:03:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:08.193 { 00:05:08.193 "name": "Malloc2", 00:05:08.193 "aliases": [ 00:05:08.193 "90bda1e7-9d7a-4e91-a551-5493637d9236" 00:05:08.193 ], 00:05:08.193 "product_name": "Malloc disk", 00:05:08.193 "block_size": 512, 00:05:08.193 "num_blocks": 16384, 00:05:08.193 "uuid": "90bda1e7-9d7a-4e91-a551-5493637d9236", 00:05:08.193 "assigned_rate_limits": { 00:05:08.193 "rw_ios_per_sec": 0, 00:05:08.193 "rw_mbytes_per_sec": 0, 00:05:08.193 "r_mbytes_per_sec": 0, 00:05:08.193 "w_mbytes_per_sec": 0 00:05:08.193 }, 00:05:08.193 "claimed": true, 00:05:08.193 "claim_type": "exclusive_write", 00:05:08.193 "zoned": false, 00:05:08.193 "supported_io_types": { 00:05:08.193 "read": true, 00:05:08.193 "write": true, 00:05:08.193 "unmap": true, 00:05:08.193 "write_zeroes": true, 00:05:08.193 "flush": true, 00:05:08.193 "reset": true, 00:05:08.193 "compare": false, 00:05:08.193 "compare_and_write": false, 00:05:08.193 "abort": true, 00:05:08.193 "nvme_admin": false, 00:05:08.193 "nvme_io": false 00:05:08.193 }, 00:05:08.193 "memory_domains": [ 00:05:08.193 { 00:05:08.193 "dma_device_id": "system", 00:05:08.193 "dma_device_type": 1 00:05:08.193 }, 00:05:08.193 { 00:05:08.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:08.193 "dma_device_type": 2 00:05:08.193 } 00:05:08.193 ], 00:05:08.193 "driver_specific": {} 00:05:08.193 }, 00:05:08.193 { 00:05:08.193 "name": "Passthru0", 00:05:08.193 "aliases": [ 00:05:08.193 "433eb907-91b6-51d7-b69f-6325d2322278" 00:05:08.193 ], 00:05:08.193 "product_name": "passthru", 00:05:08.193 "block_size": 512, 00:05:08.193 "num_blocks": 16384, 00:05:08.193 "uuid": "433eb907-91b6-51d7-b69f-6325d2322278", 00:05:08.193 "assigned_rate_limits": { 00:05:08.193 "rw_ios_per_sec": 0, 00:05:08.193 "rw_mbytes_per_sec": 0, 00:05:08.193 "r_mbytes_per_sec": 0, 00:05:08.193 "w_mbytes_per_sec": 0 00:05:08.193 }, 00:05:08.193 "claimed": false, 00:05:08.194 "zoned": false, 00:05:08.194 "supported_io_types": { 00:05:08.194 "read": true, 00:05:08.194 "write": true, 00:05:08.194 "unmap": true, 00:05:08.194 "write_zeroes": true, 00:05:08.194 "flush": true, 00:05:08.194 "reset": true, 00:05:08.194 "compare": false, 00:05:08.194 "compare_and_write": false, 00:05:08.194 "abort": true, 00:05:08.194 "nvme_admin": false, 00:05:08.194 "nvme_io": false 00:05:08.194 }, 00:05:08.194 "memory_domains": [ 00:05:08.194 { 00:05:08.194 "dma_device_id": "system", 00:05:08.194 "dma_device_type": 1 00:05:08.194 }, 00:05:08.194 { 00:05:08.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:08.194 "dma_device_type": 2 00:05:08.194 } 00:05:08.194 ], 00:05:08.194 "driver_specific": { 00:05:08.194 "passthru": { 00:05:08.194 "name": "Passthru0", 00:05:08.194 "base_bdev_name": "Malloc2" 00:05:08.194 } 00:05:08.194 } 00:05:08.194 } 00:05:08.194 ]' 00:05:08.194 14:03:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 
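The two bdev_get_bdevs dumps above show the rpc_daemon_integrity flow end to end: a malloc bdev is created, a passthru vbdev is layered on top of it (claiming the base bdev, hence "claimed": true and "claim_type": "exclusive_write" in the second dump), and jq length confirms two bdevs exist before teardown. A minimal by-hand version of the same sequence, as a sketch assuming a running spdk_tgt on the default /var/tmp/spdk.sock and an SPDK source tree for scripts/rpc.py:

  ./scripts/rpc.py bdev_malloc_create 8 512      # 8 MB, 512-byte blocks; prints the assigned name (Malloc2 in this run)
  ./scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length    # 2: the claimed base bdev plus the passthru
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc2
  ./scripts/rpc.py bdev_get_bdevs | jq length    # back to 0
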
00:05:08.451 14:03:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:08.451 14:03:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:08.451 14:03:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.451 14:03:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.451 14:03:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.451 14:03:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:08.451 14:03:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.451 14:03:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.451 14:03:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.451 14:03:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:08.451 14:03:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.451 14:03:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.451 14:03:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.451 14:03:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:08.451 14:03:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:08.451 14:03:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:08.451 00:05:08.451 real 0m0.224s 00:05:08.451 user 0m0.149s 00:05:08.451 sys 0m0.022s 00:05:08.451 14:03:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:08.451 14:03:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.451 ************************************ 00:05:08.451 END TEST rpc_daemon_integrity 00:05:08.451 ************************************ 00:05:08.451 14:03:35 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:08.451 14:03:35 rpc -- rpc/rpc.sh@84 -- # killprocess 4160394 00:05:08.451 14:03:35 rpc -- common/autotest_common.sh@946 -- # '[' -z 4160394 ']' 00:05:08.451 14:03:35 rpc -- common/autotest_common.sh@950 -- # kill -0 4160394 00:05:08.451 14:03:35 rpc -- common/autotest_common.sh@951 -- # uname 00:05:08.451 14:03:35 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:08.451 14:03:35 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4160394 00:05:08.451 14:03:35 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:08.451 14:03:35 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:08.451 14:03:35 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4160394' 00:05:08.451 killing process with pid 4160394 00:05:08.451 14:03:35 rpc -- common/autotest_common.sh@965 -- # kill 4160394 00:05:08.451 14:03:35 rpc -- common/autotest_common.sh@970 -- # wait 4160394 00:05:09.016 00:05:09.016 real 0m1.867s 00:05:09.016 user 0m2.356s 00:05:09.016 sys 0m0.577s 00:05:09.016 14:03:36 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:09.016 14:03:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.016 ************************************ 00:05:09.016 END TEST rpc 00:05:09.016 ************************************ 00:05:09.016 14:03:36 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:09.016 14:03:36 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 
00:05:09.016 14:03:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:09.016 14:03:36 -- common/autotest_common.sh@10 -- # set +x 00:05:09.016 ************************************ 00:05:09.016 START TEST skip_rpc 00:05:09.016 ************************************ 00:05:09.016 14:03:36 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:09.016 * Looking for test storage... 00:05:09.016 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:09.016 14:03:36 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:09.016 14:03:36 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:09.016 14:03:36 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:09.016 14:03:36 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:09.016 14:03:36 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:09.016 14:03:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.016 ************************************ 00:05:09.016 START TEST skip_rpc 00:05:09.016 ************************************ 00:05:09.016 14:03:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:09.016 14:03:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=4160827 00:05:09.016 14:03:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:09.016 14:03:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.016 14:03:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:09.016 [2024-07-24 14:03:36.273767] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
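The target starting here was launched with --no-rpc-server, so the test_skip_rpc assertions that follow expect every RPC to fail. What the NOT rpc_cmd spdk_get_version check below amounts to, as a sketch assuming an SPDK source tree:

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  sleep 5                              # the test sleeps rather than waiting on a socket that will never appear
  ./scripts/rpc.py spdk_get_version \
      && echo 'unexpected: RPC served' # must not happen: no RPC server was started
  kill $!                              # tear the target down
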
00:05:09.016 [2024-07-24 14:03:36.273872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4160827 ] 00:05:09.016 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.016 [2024-07-24 14:03:36.340007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.273 [2024-07-24 14:03:36.429608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.530 14:03:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:14.530 14:03:41 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:14.530 14:03:41 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:14.531 14:03:41 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:14.531 14:03:41 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:14.531 14:03:41 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:14.531 14:03:41 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:14.531 14:03:41 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:14.531 14:03:41 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.531 14:03:41 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.531 14:03:41 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:14.531 14:03:41 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:14.531 14:03:41 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:14.531 14:03:41 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:14.531 14:03:41 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:14.531 14:03:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:14.531 14:03:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 4160827 00:05:14.531 14:03:41 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 4160827 ']' 00:05:14.531 14:03:41 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 4160827 00:05:14.531 14:03:41 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:05:14.531 14:03:41 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:14.531 14:03:41 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4160827 00:05:14.531 14:03:41 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:14.531 14:03:41 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:14.531 14:03:41 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4160827' 00:05:14.531 killing process with pid 4160827 00:05:14.531 14:03:41 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 4160827 00:05:14.531 14:03:41 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 4160827 00:05:14.531 00:05:14.531 real 0m5.443s 00:05:14.531 user 0m5.121s 00:05:14.531 sys 0m0.324s 00:05:14.531 14:03:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:14.531 14:03:41 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.531 ************************************ 00:05:14.531 END TEST skip_rpc 
00:05:14.531 ************************************ 00:05:14.531 14:03:41 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:14.531 14:03:41 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:14.531 14:03:41 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:14.531 14:03:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.531 ************************************ 00:05:14.531 START TEST skip_rpc_with_json 00:05:14.531 ************************************ 00:05:14.531 14:03:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:05:14.531 14:03:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:14.531 14:03:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=4161519 00:05:14.531 14:03:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:14.531 14:03:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:14.531 14:03:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 4161519 00:05:14.531 14:03:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 4161519 ']' 00:05:14.531 14:03:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.531 14:03:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:14.531 14:03:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.531 14:03:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:14.531 14:03:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:14.531 [2024-07-24 14:03:41.761464] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
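Once the target launching here is up, test_skip_rpc_with_json first asks for a TCP transport before creating one and expects the -19 / "No such device" error shown below, then creates the transport. The same sequence by hand, as a sketch against a running target:

  ./scripts/rpc.py nvmf_get_transports --trtype tcp   # fails with "No such device": nothing created yet
  ./scripts/rpc.py nvmf_create_transport -t tcp       # logs "*** TCP Transport Init ***"
  ./scripts/rpc.py nvmf_get_transports --trtype tcp   # now returns the TCP transport parameters
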
00:05:14.531 [2024-07-24 14:03:41.761568] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4161519 ] 00:05:14.531 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.531 [2024-07-24 14:03:41.827855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.788 [2024-07-24 14:03:41.917634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.046 14:03:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:15.046 14:03:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:05:15.046 14:03:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:15.046 14:03:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.046 14:03:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:15.046 [2024-07-24 14:03:42.175575] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:15.046 request: 00:05:15.046 { 00:05:15.046 "trtype": "tcp", 00:05:15.046 "method": "nvmf_get_transports", 00:05:15.046 "req_id": 1 00:05:15.046 } 00:05:15.046 Got JSON-RPC error response 00:05:15.046 response: 00:05:15.046 { 00:05:15.046 "code": -19, 00:05:15.046 "message": "No such device" 00:05:15.046 } 00:05:15.046 14:03:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:15.046 14:03:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:15.046 14:03:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.046 14:03:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:15.046 [2024-07-24 14:03:42.183692] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:15.046 14:03:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.046 14:03:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:15.046 14:03:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.046 14:03:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:15.046 14:03:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.046 14:03:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:15.046 { 00:05:15.046 "subsystems": [ 00:05:15.046 { 00:05:15.046 "subsystem": "keyring", 00:05:15.046 "config": [] 00:05:15.046 }, 00:05:15.046 { 00:05:15.046 "subsystem": "iobuf", 00:05:15.046 "config": [ 00:05:15.046 { 00:05:15.046 "method": "iobuf_set_options", 00:05:15.046 "params": { 00:05:15.046 "small_pool_count": 8192, 00:05:15.046 "large_pool_count": 1024, 00:05:15.046 "small_bufsize": 8192, 00:05:15.046 "large_bufsize": 135168 00:05:15.046 } 00:05:15.046 } 00:05:15.046 ] 00:05:15.046 }, 00:05:15.046 { 00:05:15.046 "subsystem": "sock", 00:05:15.046 "config": [ 00:05:15.046 { 00:05:15.046 "method": "sock_set_default_impl", 00:05:15.046 "params": { 00:05:15.046 "impl_name": "posix" 00:05:15.046 } 00:05:15.046 }, 00:05:15.046 { 00:05:15.046 "method": "sock_impl_set_options", 00:05:15.046 "params": { 00:05:15.046 "impl_name": "ssl", 00:05:15.046 "recv_buf_size": 4096, 
00:05:15.046 "send_buf_size": 4096, 00:05:15.046 "enable_recv_pipe": true, 00:05:15.046 "enable_quickack": false, 00:05:15.046 "enable_placement_id": 0, 00:05:15.046 "enable_zerocopy_send_server": true, 00:05:15.046 "enable_zerocopy_send_client": false, 00:05:15.046 "zerocopy_threshold": 0, 00:05:15.046 "tls_version": 0, 00:05:15.046 "enable_ktls": false 00:05:15.046 } 00:05:15.046 }, 00:05:15.046 { 00:05:15.046 "method": "sock_impl_set_options", 00:05:15.046 "params": { 00:05:15.046 "impl_name": "posix", 00:05:15.046 "recv_buf_size": 2097152, 00:05:15.046 "send_buf_size": 2097152, 00:05:15.046 "enable_recv_pipe": true, 00:05:15.046 "enable_quickack": false, 00:05:15.046 "enable_placement_id": 0, 00:05:15.046 "enable_zerocopy_send_server": true, 00:05:15.046 "enable_zerocopy_send_client": false, 00:05:15.046 "zerocopy_threshold": 0, 00:05:15.046 "tls_version": 0, 00:05:15.046 "enable_ktls": false 00:05:15.046 } 00:05:15.046 } 00:05:15.046 ] 00:05:15.046 }, 00:05:15.046 { 00:05:15.046 "subsystem": "vmd", 00:05:15.046 "config": [] 00:05:15.046 }, 00:05:15.046 { 00:05:15.046 "subsystem": "accel", 00:05:15.046 "config": [ 00:05:15.046 { 00:05:15.046 "method": "accel_set_options", 00:05:15.046 "params": { 00:05:15.046 "small_cache_size": 128, 00:05:15.046 "large_cache_size": 16, 00:05:15.046 "task_count": 2048, 00:05:15.046 "sequence_count": 2048, 00:05:15.046 "buf_count": 2048 00:05:15.046 } 00:05:15.046 } 00:05:15.046 ] 00:05:15.046 }, 00:05:15.046 { 00:05:15.046 "subsystem": "bdev", 00:05:15.046 "config": [ 00:05:15.046 { 00:05:15.046 "method": "bdev_set_options", 00:05:15.046 "params": { 00:05:15.046 "bdev_io_pool_size": 65535, 00:05:15.046 "bdev_io_cache_size": 256, 00:05:15.046 "bdev_auto_examine": true, 00:05:15.046 "iobuf_small_cache_size": 128, 00:05:15.046 "iobuf_large_cache_size": 16 00:05:15.046 } 00:05:15.046 }, 00:05:15.046 { 00:05:15.046 "method": "bdev_raid_set_options", 00:05:15.046 "params": { 00:05:15.046 "process_window_size_kb": 1024 00:05:15.046 } 00:05:15.046 }, 00:05:15.046 { 00:05:15.046 "method": "bdev_iscsi_set_options", 00:05:15.046 "params": { 00:05:15.046 "timeout_sec": 30 00:05:15.046 } 00:05:15.046 }, 00:05:15.046 { 00:05:15.046 "method": "bdev_nvme_set_options", 00:05:15.046 "params": { 00:05:15.046 "action_on_timeout": "none", 00:05:15.046 "timeout_us": 0, 00:05:15.046 "timeout_admin_us": 0, 00:05:15.046 "keep_alive_timeout_ms": 10000, 00:05:15.046 "arbitration_burst": 0, 00:05:15.046 "low_priority_weight": 0, 00:05:15.046 "medium_priority_weight": 0, 00:05:15.046 "high_priority_weight": 0, 00:05:15.046 "nvme_adminq_poll_period_us": 10000, 00:05:15.046 "nvme_ioq_poll_period_us": 0, 00:05:15.046 "io_queue_requests": 0, 00:05:15.046 "delay_cmd_submit": true, 00:05:15.046 "transport_retry_count": 4, 00:05:15.046 "bdev_retry_count": 3, 00:05:15.046 "transport_ack_timeout": 0, 00:05:15.046 "ctrlr_loss_timeout_sec": 0, 00:05:15.046 "reconnect_delay_sec": 0, 00:05:15.046 "fast_io_fail_timeout_sec": 0, 00:05:15.046 "disable_auto_failback": false, 00:05:15.046 "generate_uuids": false, 00:05:15.046 "transport_tos": 0, 00:05:15.046 "nvme_error_stat": false, 00:05:15.046 "rdma_srq_size": 0, 00:05:15.046 "io_path_stat": false, 00:05:15.046 "allow_accel_sequence": false, 00:05:15.046 "rdma_max_cq_size": 0, 00:05:15.046 "rdma_cm_event_timeout_ms": 0, 00:05:15.046 "dhchap_digests": [ 00:05:15.046 "sha256", 00:05:15.046 "sha384", 00:05:15.046 "sha512" 00:05:15.046 ], 00:05:15.046 "dhchap_dhgroups": [ 00:05:15.046 "null", 00:05:15.046 "ffdhe2048", 00:05:15.046 "ffdhe3072", 
00:05:15.046 "ffdhe4096", 00:05:15.046 "ffdhe6144", 00:05:15.046 "ffdhe8192" 00:05:15.046 ] 00:05:15.046 } 00:05:15.046 }, 00:05:15.046 { 00:05:15.046 "method": "bdev_nvme_set_hotplug", 00:05:15.046 "params": { 00:05:15.046 "period_us": 100000, 00:05:15.046 "enable": false 00:05:15.046 } 00:05:15.046 }, 00:05:15.046 { 00:05:15.046 "method": "bdev_wait_for_examine" 00:05:15.046 } 00:05:15.046 ] 00:05:15.046 }, 00:05:15.046 { 00:05:15.046 "subsystem": "scsi", 00:05:15.046 "config": null 00:05:15.046 }, 00:05:15.046 { 00:05:15.047 "subsystem": "scheduler", 00:05:15.047 "config": [ 00:05:15.047 { 00:05:15.047 "method": "framework_set_scheduler", 00:05:15.047 "params": { 00:05:15.047 "name": "static" 00:05:15.047 } 00:05:15.047 } 00:05:15.047 ] 00:05:15.047 }, 00:05:15.047 { 00:05:15.047 "subsystem": "vhost_scsi", 00:05:15.047 "config": [] 00:05:15.047 }, 00:05:15.047 { 00:05:15.047 "subsystem": "vhost_blk", 00:05:15.047 "config": [] 00:05:15.047 }, 00:05:15.047 { 00:05:15.047 "subsystem": "ublk", 00:05:15.047 "config": [] 00:05:15.047 }, 00:05:15.047 { 00:05:15.047 "subsystem": "nbd", 00:05:15.047 "config": [] 00:05:15.047 }, 00:05:15.047 { 00:05:15.047 "subsystem": "nvmf", 00:05:15.047 "config": [ 00:05:15.047 { 00:05:15.047 "method": "nvmf_set_config", 00:05:15.047 "params": { 00:05:15.047 "discovery_filter": "match_any", 00:05:15.047 "admin_cmd_passthru": { 00:05:15.047 "identify_ctrlr": false 00:05:15.047 } 00:05:15.047 } 00:05:15.047 }, 00:05:15.047 { 00:05:15.047 "method": "nvmf_set_max_subsystems", 00:05:15.047 "params": { 00:05:15.047 "max_subsystems": 1024 00:05:15.047 } 00:05:15.047 }, 00:05:15.047 { 00:05:15.047 "method": "nvmf_set_crdt", 00:05:15.047 "params": { 00:05:15.047 "crdt1": 0, 00:05:15.047 "crdt2": 0, 00:05:15.047 "crdt3": 0 00:05:15.047 } 00:05:15.047 }, 00:05:15.047 { 00:05:15.047 "method": "nvmf_create_transport", 00:05:15.047 "params": { 00:05:15.047 "trtype": "TCP", 00:05:15.047 "max_queue_depth": 128, 00:05:15.047 "max_io_qpairs_per_ctrlr": 127, 00:05:15.047 "in_capsule_data_size": 4096, 00:05:15.047 "max_io_size": 131072, 00:05:15.047 "io_unit_size": 131072, 00:05:15.047 "max_aq_depth": 128, 00:05:15.047 "num_shared_buffers": 511, 00:05:15.047 "buf_cache_size": 4294967295, 00:05:15.047 "dif_insert_or_strip": false, 00:05:15.047 "zcopy": false, 00:05:15.047 "c2h_success": true, 00:05:15.047 "sock_priority": 0, 00:05:15.047 "abort_timeout_sec": 1, 00:05:15.047 "ack_timeout": 0, 00:05:15.047 "data_wr_pool_size": 0 00:05:15.047 } 00:05:15.047 } 00:05:15.047 ] 00:05:15.047 }, 00:05:15.047 { 00:05:15.047 "subsystem": "iscsi", 00:05:15.047 "config": [ 00:05:15.047 { 00:05:15.047 "method": "iscsi_set_options", 00:05:15.047 "params": { 00:05:15.047 "node_base": "iqn.2016-06.io.spdk", 00:05:15.047 "max_sessions": 128, 00:05:15.047 "max_connections_per_session": 2, 00:05:15.047 "max_queue_depth": 64, 00:05:15.047 "default_time2wait": 2, 00:05:15.047 "default_time2retain": 20, 00:05:15.047 "first_burst_length": 8192, 00:05:15.047 "immediate_data": true, 00:05:15.047 "allow_duplicated_isid": false, 00:05:15.047 "error_recovery_level": 0, 00:05:15.047 "nop_timeout": 60, 00:05:15.047 "nop_in_interval": 30, 00:05:15.047 "disable_chap": false, 00:05:15.047 "require_chap": false, 00:05:15.047 "mutual_chap": false, 00:05:15.047 "chap_group": 0, 00:05:15.047 "max_large_datain_per_connection": 64, 00:05:15.047 "max_r2t_per_connection": 4, 00:05:15.047 "pdu_pool_size": 36864, 00:05:15.047 "immediate_data_pool_size": 16384, 00:05:15.047 "data_out_pool_size": 2048 00:05:15.047 } 
00:05:15.047 } 00:05:15.047 ] 00:05:15.047 } 00:05:15.047 ] 00:05:15.047 } 00:05:15.047 14:03:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:15.047 14:03:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 4161519 00:05:15.047 14:03:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 4161519 ']' 00:05:15.047 14:03:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 4161519 00:05:15.047 14:03:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:15.047 14:03:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:15.047 14:03:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4161519 00:05:15.047 14:03:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:15.047 14:03:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:15.047 14:03:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4161519' 00:05:15.047 killing process with pid 4161519 00:05:15.047 14:03:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 4161519 00:05:15.047 14:03:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 4161519 00:05:15.611 14:03:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=4161659 00:05:15.611 14:03:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:15.611 14:03:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:20.891 14:03:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 4161659 00:05:20.891 14:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 4161659 ']' 00:05:20.891 14:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 4161659 00:05:20.891 14:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:20.891 14:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:20.891 14:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4161659 00:05:20.891 14:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:20.891 14:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:20.891 14:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4161659' 00:05:20.891 killing process with pid 4161659 00:05:20.892 14:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 4161659 00:05:20.892 14:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 4161659 00:05:20.892 14:03:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:20.892 14:03:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:20.892 00:05:20.892 real 0m6.459s 00:05:20.892 user 0m6.029s 00:05:20.892 sys 0m0.720s 00:05:20.892 14:03:48 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:05:20.892 14:03:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.892 ************************************ 00:05:20.892 END TEST skip_rpc_with_json 00:05:20.892 ************************************ 00:05:20.892 14:03:48 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:20.892 14:03:48 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:20.892 14:03:48 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:20.892 14:03:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.892 ************************************ 00:05:20.892 START TEST skip_rpc_with_delay 00:05:20.892 ************************************ 00:05:20.892 14:03:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:05:20.892 14:03:48 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:20.892 14:03:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:20.892 14:03:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:20.892 14:03:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:20.892 14:03:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:20.892 14:03:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:20.892 14:03:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:20.892 14:03:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:20.892 14:03:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:20.892 14:03:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:20.892 14:03:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:20.892 14:03:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:21.150 [2024-07-24 14:03:48.271769] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
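The *ERROR* above is the point of test_skip_rpc_with_delay: --wait-for-rpc requires an RPC server, so combining it with --no-rpc-server must abort startup. In the normal flow, --wait-for-rpc holds subsystem initialization until an RPC tells it to proceed; a sketch of that flow, assuming an SPDK source tree:

  ./build/bin/spdk_tgt --wait-for-rpc &
  ./scripts/rpc.py rpc_get_methods       # enumerate RPCs; only a startup-time subset is callable before init
  ./scripts/rpc.py framework_start_init  # begin subsystem initialization
  ./scripts/rpc.py framework_wait_init   # block until initialization completes
  ./scripts/rpc.py spdk_get_version      # the full RPC set is now available
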
00:05:21.150 [2024-07-24 14:03:48.271906] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:21.150 14:03:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:21.150 14:03:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:21.150 14:03:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:21.150 14:03:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:21.150 00:05:21.150 real 0m0.065s 00:05:21.150 user 0m0.044s 00:05:21.150 sys 0m0.021s 00:05:21.150 14:03:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:21.150 14:03:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:21.150 ************************************ 00:05:21.150 END TEST skip_rpc_with_delay 00:05:21.150 ************************************ 00:05:21.150 14:03:48 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:21.150 14:03:48 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:21.150 14:03:48 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:21.150 14:03:48 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:21.150 14:03:48 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:21.150 14:03:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.150 ************************************ 00:05:21.150 START TEST exit_on_failed_rpc_init 00:05:21.150 ************************************ 00:05:21.150 14:03:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:05:21.150 14:03:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=4162368 00:05:21.150 14:03:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:21.150 14:03:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 4162368 00:05:21.150 14:03:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 4162368 ']' 00:05:21.150 14:03:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.150 14:03:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:21.150 14:03:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.150 14:03:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:21.150 14:03:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:21.150 [2024-07-24 14:03:48.383934] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:05:21.150 [2024-07-24 14:03:48.384022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4162368 ] 00:05:21.150 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.150 [2024-07-24 14:03:48.452417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.408 [2024-07-24 14:03:48.547263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.666 14:03:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:21.666 14:03:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:05:21.666 14:03:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:21.666 14:03:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:21.666 14:03:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:21.666 14:03:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:21.666 14:03:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:21.666 14:03:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:21.666 14:03:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:21.666 14:03:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:21.666 14:03:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:21.666 14:03:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:21.666 14:03:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:21.666 14:03:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:21.666 14:03:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:21.666 [2024-07-24 14:03:48.852491] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:21.666 [2024-07-24 14:03:48.852561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4162386 ] 00:05:21.666 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.666 [2024-07-24 14:03:48.921381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.666 [2024-07-24 14:03:49.016294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.666 [2024-07-24 14:03:49.016436] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
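The "socket path /var/tmp/spdk.sock in use" error above is the deliberate failure in exit_on_failed_rpc_init: the second spdk_tgt (core mask 0x2) is pointed at the same default RPC socket the first instance already owns, so the listen fails and the app stops non-zero. Running two targets side by side needs distinct RPC sockets; a sketch, where the second socket path is arbitrary:

  ./build/bin/spdk_tgt -m 0x1 &                              # first instance, default /var/tmp/spdk.sock
  ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &       # second instance, its own listen address
  ./scripts/rpc.py spdk_get_version                          # default socket -> first instance
  ./scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version   # -s targets the second instance
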
00:05:21.666 [2024-07-24 14:03:49.016457] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:21.666 [2024-07-24 14:03:49.016471] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:21.924 14:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:21.924 14:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:21.924 14:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:21.924 14:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:21.924 14:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:21.924 14:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:21.924 14:03:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:21.924 14:03:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 4162368 00:05:21.924 14:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 4162368 ']' 00:05:21.924 14:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 4162368 00:05:21.924 14:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:05:21.924 14:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:21.924 14:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4162368 00:05:21.924 14:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:21.924 14:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:21.924 14:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4162368' 00:05:21.924 killing process with pid 4162368 00:05:21.924 14:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 4162368 00:05:21.924 14:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 4162368 00:05:22.181 00:05:22.181 real 0m1.205s 00:05:22.181 user 0m1.335s 00:05:22.181 sys 0m0.457s 00:05:22.181 14:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:22.181 14:03:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:22.181 ************************************ 00:05:22.181 END TEST exit_on_failed_rpc_init 00:05:22.181 ************************************ 00:05:22.439 14:03:49 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:22.439 00:05:22.439 real 0m13.419s 00:05:22.439 user 0m12.630s 00:05:22.439 sys 0m1.682s 00:05:22.439 14:03:49 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:22.439 14:03:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.439 ************************************ 00:05:22.439 END TEST skip_rpc 00:05:22.439 ************************************ 00:05:22.439 14:03:49 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:22.439 14:03:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:22.439 14:03:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:22.439 14:03:49 -- 
common/autotest_common.sh@10 -- # set +x 00:05:22.439 ************************************ 00:05:22.439 START TEST rpc_client 00:05:22.439 ************************************ 00:05:22.439 14:03:49 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:22.439 * Looking for test storage... 00:05:22.440 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:05:22.440 14:03:49 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:22.440 OK 00:05:22.440 14:03:49 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:22.440 00:05:22.440 real 0m0.068s 00:05:22.440 user 0m0.028s 00:05:22.440 sys 0m0.045s 00:05:22.440 14:03:49 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:22.440 14:03:49 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:22.440 ************************************ 00:05:22.440 END TEST rpc_client 00:05:22.440 ************************************ 00:05:22.440 14:03:49 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:22.440 14:03:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:22.440 14:03:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:22.440 14:03:49 -- common/autotest_common.sh@10 -- # set +x 00:05:22.440 ************************************ 00:05:22.440 START TEST json_config 00:05:22.440 ************************************ 00:05:22.440 14:03:49 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:22.440 14:03:49 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:22.440 14:03:49 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:22.440 14:03:49 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:22.440 14:03:49 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:22.440 14:03:49 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:22.440 14:03:49 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:22.440 14:03:49 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:22.440 14:03:49 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:22.440 14:03:49 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:22.440 14:03:49 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:22.440 14:03:49 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:22.440 14:03:49 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:22.440 14:03:49 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:05:22.440 14:03:49 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:05:22.440 14:03:49 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:22.440 14:03:49 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:22.440 14:03:49 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:22.440 14:03:49 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:22.440 14:03:49 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:22.440 14:03:49 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:22.440 14:03:49 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:22.440 14:03:49 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:22.440 14:03:49 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.440 14:03:49 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.440 14:03:49 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.440 14:03:49 json_config -- paths/export.sh@5 -- # export PATH 00:05:22.440 14:03:49 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.440 14:03:49 json_config -- nvmf/common.sh@47 -- # : 0 00:05:22.440 14:03:49 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:22.440 14:03:49 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:22.440 14:03:49 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:22.440 14:03:49 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:22.440 14:03:49 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:22.440 14:03:49 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:22.440 14:03:49 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:22.440 14:03:49 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:22.440 14:03:49 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:22.440 14:03:49 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:22.440 14:03:49 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:22.440 14:03:49 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:22.440 14:03:49 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:22.440 14:03:49 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:22.440 14:03:49 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:22.440 14:03:49 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:22.440 14:03:49 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:22.440 14:03:49 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:22.440 14:03:49 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:22.440 14:03:49 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:05:22.440 14:03:49 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:22.440 14:03:49 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:22.440 14:03:49 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:22.440 14:03:49 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:22.440 INFO: JSON configuration test init 00:05:22.440 14:03:49 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:22.440 14:03:49 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:22.440 14:03:49 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:22.440 14:03:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.440 14:03:49 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:22.440 14:03:49 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:22.440 14:03:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.440 14:03:49 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:22.440 14:03:49 json_config -- json_config/common.sh@9 -- # local app=target 00:05:22.440 14:03:49 json_config -- json_config/common.sh@10 -- # shift 00:05:22.440 14:03:49 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:22.440 14:03:49 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:22.440 14:03:49 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:22.440 14:03:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:22.440 14:03:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:22.440 14:03:49 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=4162628 00:05:22.440 14:03:49 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:22.440 14:03:49 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:22.440 Waiting for target to run... 
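The json_config suite revolves around configuration round-trips: state is built in this target (the gen_nvme.sh | load_config call just below), captured with save_config, and a fresh target started from that JSON must converge to identical state. The core round-trip as a sketch, with an illustrative /tmp path in place of the suite's spdk_tgt_config.json:

  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/spdk_tgt_config.json
  # ...stop the target, then restart it straight from the captured configuration:
  ./build/bin/spdk_tgt -r /var/tmp/spdk_tgt.sock --json /tmp/spdk_tgt_config.json
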
00:05:22.440 14:03:49 json_config -- json_config/common.sh@25 -- # waitforlisten 4162628 /var/tmp/spdk_tgt.sock 00:05:22.440 14:03:49 json_config -- common/autotest_common.sh@827 -- # '[' -z 4162628 ']' 00:05:22.440 14:03:49 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:22.440 14:03:49 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:22.440 14:03:49 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:22.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:22.440 14:03:49 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:22.440 14:03:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.698 [2024-07-24 14:03:49.828478] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:22.698 [2024-07-24 14:03:49.828584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4162628 ] 00:05:22.698 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.956 [2024-07-24 14:03:50.178050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.956 [2024-07-24 14:03:50.238968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.522 14:03:50 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:23.522 14:03:50 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:23.522 14:03:50 json_config -- json_config/common.sh@26 -- # echo '' 00:05:23.522 00:05:23.522 14:03:50 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:23.522 14:03:50 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:23.522 14:03:50 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:23.522 14:03:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.522 14:03:50 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:23.522 14:03:50 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:23.522 14:03:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:23.522 14:03:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.522 14:03:50 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:23.522 14:03:50 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:23.522 14:03:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:26.802 14:03:53 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:26.802 14:03:53 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:26.802 14:03:53 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:26.802 14:03:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.802 14:03:53 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:26.802 14:03:53 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:26.802 14:03:53 json_config -- 
json_config/json_config.sh@46 -- # local enabled_types 00:05:26.802 14:03:53 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:26.802 14:03:53 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:26.802 14:03:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:27.060 14:03:54 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:27.060 14:03:54 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:27.060 14:03:54 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:27.060 14:03:54 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:27.060 14:03:54 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:27.060 14:03:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.060 14:03:54 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:27.060 14:03:54 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:27.060 14:03:54 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:27.060 14:03:54 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:27.060 14:03:54 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:27.060 14:03:54 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:27.060 14:03:54 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:27.060 14:03:54 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:27.060 14:03:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.060 14:03:54 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:27.060 14:03:54 json_config -- json_config/json_config.sh@233 -- # [[ rdma == \r\d\m\a ]] 00:05:27.060 14:03:54 json_config -- json_config/json_config.sh@234 -- # TEST_TRANSPORT=rdma 00:05:27.060 14:03:54 json_config -- json_config/json_config.sh@234 -- # nvmftestinit 00:05:27.060 14:03:54 json_config -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:05:27.060 14:03:54 json_config -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:27.060 14:03:54 json_config -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:27.060 14:03:54 json_config -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:27.060 14:03:54 json_config -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:27.060 14:03:54 json_config -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:27.060 14:03:54 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:05:27.060 14:03:54 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:27.060 14:03:54 json_config -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:05:27.060 14:03:54 json_config -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:27.060 14:03:54 json_config -- nvmf/common.sh@285 -- # xtrace_disable 00:05:27.060 14:03:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@291 -- # pci_devs=() 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@291 -- # local -a pci_devs 
00:05:29.588 14:03:56 json_config -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@295 -- # net_devs=() 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@296 -- # e810=() 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@296 -- # local -ga e810 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@297 -- # x722=() 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@297 -- # local -ga x722 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@298 -- # mlx=() 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@298 -- # local -ga mlx 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:05:29.588 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@341 -- # 
echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:05:29.588 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:05:29.588 Found net devices under 0000:81:00.0: mlx_0_0 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:05:29.588 Found net devices under 0000:81:00.1: mlx_0_1 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@414 -- # is_hw=yes 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@420 -- # rdma_device_init 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@58 -- # uname 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@62 -- # modprobe ib_cm 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@63 -- # modprobe ib_core 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@64 -- # modprobe ib_umad 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@66 -- # modprobe iw_cm 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@502 -- # allocate_nic_ips 00:05:29.588 14:03:56 
json_config -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@73 -- # get_rdma_if_list 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:05:29.588 14:03:56 json_config -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:05:29.589 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:29.589 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:05:29.589 altname enp129s0f0np0 00:05:29.589 inet 192.168.100.8/24 scope global mlx_0_0 00:05:29.589 valid_lft forever preferred_lft forever 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:05:29.589 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:29.589 link/ether 
24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:05:29.589 altname enp129s0f1np1 00:05:29.589 inet 192.168.100.9/24 scope global mlx_0_1 00:05:29.589 valid_lft forever preferred_lft forever 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@422 -- # return 0 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@86 -- # get_rdma_if_list 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:05:29.589 192.168.100.9' 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:05:29.589 192.168.100.9' 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@457 -- # head -n 1 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@457 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:05:29.589 192.168.100.9' 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@458 -- # tail -n +2 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@458 -- # head -n 1 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:05:29.589 14:03:56 json_config -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:05:29.589 14:03:56 json_config -- json_config/json_config.sh@237 -- # [[ -z 192.168.100.8 ]] 00:05:29.589 14:03:56 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:29.589 14:03:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:29.589 MallocForNvmf0 00:05:29.589 14:03:56 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:29.589 14:03:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:29.847 MallocForNvmf1 00:05:29.847 14:03:57 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:05:29.847 14:03:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:05:30.105 [2024-07-24 14:03:57.347426] rdma.c:2726:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:05:30.105 [2024-07-24 14:03:57.378380] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x94d810/0x976200) succeed. 00:05:30.105 [2024-07-24 14:03:57.392991] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x94fa00/0x8d6100) succeed. 
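With the RDMA fabric confirmed (NVMF_FIRST_TARGET_IP=192.168.100.8, modprobe nvme-rdma), the test seeds the target over its UNIX-socket RPC channel. The exact calls are scattered through the trace above; collected in one place, with the paths as used in this run:

    rpc="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0    # 8 MiB bdev, 512 B blocks
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MiB bdev, 1024 B blocks
    $rpc nvmf_create_transport -t rdma -u 8192 -c 0        # -u: IO unit size, -c: in-capsule data size

Note the warning printed by nvmf_create_transport: the requested in-capsule data size of 0 is bumped to 256, the minimum required to support msdbd=16.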
00:05:30.105 14:03:57 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:30.105 14:03:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:30.363 14:03:57 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:30.363 14:03:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:30.621 14:03:57 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:30.621 14:03:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:30.878 14:03:58 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:30.878 14:03:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:31.135 [2024-07-24 14:03:58.402247] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:31.135 14:03:58 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:31.135 14:03:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:31.135 14:03:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.135 14:03:58 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:31.135 14:03:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:31.135 14:03:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.135 14:03:58 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:31.135 14:03:58 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:31.135 14:03:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:31.392 MallocBdevForConfigChangeCheck 00:05:31.392 14:03:58 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:31.392 14:03:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:31.392 14:03:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.392 14:03:58 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:31.392 14:03:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:31.957 14:03:59 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:31.957 INFO: shutting down applications... 
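The trace above then assembles the NVMe-oF subsystem before taking the first save_config snapshot. The same sequence as a standalone script, values verbatim from this run:

    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc save_config > spdk_tgt_config.json   # snapshot replayed after the restart below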
00:05:31.957 14:03:59 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:31.957 14:03:59 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:31.957 14:03:59 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:31.957 14:03:59 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:33.855 Calling clear_iscsi_subsystem 00:05:33.855 Calling clear_nvmf_subsystem 00:05:33.855 Calling clear_nbd_subsystem 00:05:33.855 Calling clear_ublk_subsystem 00:05:33.855 Calling clear_vhost_blk_subsystem 00:05:33.855 Calling clear_vhost_scsi_subsystem 00:05:33.855 Calling clear_bdev_subsystem 00:05:33.855 14:04:00 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:05:33.855 14:04:00 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:33.855 14:04:00 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:33.855 14:04:00 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:33.855 14:04:00 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:33.855 14:04:00 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:33.855 14:04:01 json_config -- json_config/json_config.sh@345 -- # break 00:05:33.855 14:04:01 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:33.855 14:04:01 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:33.855 14:04:01 json_config -- json_config/common.sh@31 -- # local app=target 00:05:33.855 14:04:01 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:33.855 14:04:01 json_config -- json_config/common.sh@35 -- # [[ -n 4162628 ]] 00:05:33.855 14:04:01 json_config -- json_config/common.sh@38 -- # kill -SIGINT 4162628 00:05:33.855 14:04:01 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:33.855 14:04:01 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:33.855 14:04:01 json_config -- json_config/common.sh@41 -- # kill -0 4162628 00:05:33.855 14:04:01 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:34.444 14:04:01 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:34.444 14:04:01 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:34.444 14:04:01 json_config -- json_config/common.sh@41 -- # kill -0 4162628 00:05:34.444 14:04:01 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:34.444 14:04:01 json_config -- json_config/common.sh@43 -- # break 00:05:34.444 14:04:01 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:34.444 14:04:01 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:34.444 SPDK target shutdown done 00:05:34.444 14:04:01 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:34.444 INFO: relaunching applications... 
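The shutdown path visible above (kill -SIGINT, then a bounded liveness poll) is the pattern json_config/common.sh applies to every target teardown in this log. A hedged sketch of that loop, using the pid from this run:

    app_pid=4162628                      # target pid in this run
    kill -SIGINT "$app_pid"
    for ((i = 0; i < 30; i++)); do       # same 30 x 0.5 s budget as the trace
        if ! kill -0 "$app_pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done

Here the process exits within one iteration, so only a single sleep 0.5 appears in the trace.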
00:05:34.444 14:04:01 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:34.444 14:04:01 json_config -- json_config/common.sh@9 -- # local app=target 00:05:34.444 14:04:01 json_config -- json_config/common.sh@10 -- # shift 00:05:34.444 14:04:01 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:34.444 14:04:01 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:34.444 14:04:01 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:34.444 14:04:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:34.444 14:04:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:34.444 14:04:01 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=4165764 00:05:34.444 14:04:01 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:34.444 14:04:01 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:34.444 Waiting for target to run... 00:05:34.444 14:04:01 json_config -- json_config/common.sh@25 -- # waitforlisten 4165764 /var/tmp/spdk_tgt.sock 00:05:34.444 14:04:01 json_config -- common/autotest_common.sh@827 -- # '[' -z 4165764 ']' 00:05:34.444 14:04:01 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:34.445 14:04:01 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:34.445 14:04:01 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:34.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:34.445 14:04:01 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:34.445 14:04:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.445 [2024-07-24 14:04:01.653610] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:34.445 [2024-07-24 14:04:01.653701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4165764 ] 00:05:34.445 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.020 [2024-07-24 14:04:02.176713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.020 [2024-07-24 14:04:02.258031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.297 [2024-07-24 14:04:05.325969] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1521500/0x154de80) succeed. 00:05:38.297 [2024-07-24 14:04:05.340131] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15236f0/0x15ade80) succeed. 
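The relaunch replays the snapshot rather than re-issuing RPCs: spdk_tgt is restarted with --json spdk_tgt_config.json and the harness blocks until the RPC socket answers again. A simplified sketch of that restart; the readiness poll is an illustrative stand-in for the harness's waitforlisten helper:

    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json spdk_tgt_config.json &
    app_pid=$!                            # 4165764 in this run
    until scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5                         # simplified readiness poll
    done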
00:05:38.297 [2024-07-24 14:04:05.398687] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:38.862 14:04:06 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:38.862 14:04:06 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:38.862 14:04:06 json_config -- json_config/common.sh@26 -- # echo '' 00:05:38.862 00:05:38.862 14:04:06 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:38.862 14:04:06 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:38.862 INFO: Checking if target configuration is the same... 00:05:38.862 14:04:06 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.862 14:04:06 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:38.862 14:04:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:38.862 + '[' 2 -ne 2 ']' 00:05:38.862 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:38.862 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:05:38.862 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:38.862 +++ basename /dev/fd/62 00:05:38.862 ++ mktemp /tmp/62.XXX 00:05:38.862 + tmp_file_1=/tmp/62.MyH 00:05:38.862 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.862 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:38.862 + tmp_file_2=/tmp/spdk_tgt_config.json.rP6 00:05:38.862 + ret=0 00:05:38.863 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:39.120 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:39.120 + diff -u /tmp/62.MyH /tmp/spdk_tgt_config.json.rP6 00:05:39.120 + echo 'INFO: JSON config files are the same' 00:05:39.120 INFO: JSON config files are the same 00:05:39.120 + rm /tmp/62.MyH /tmp/spdk_tgt_config.json.rP6 00:05:39.120 + exit 0 00:05:39.120 14:04:06 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:39.120 14:04:06 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:39.120 INFO: changing configuration and checking if this can be detected... 
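The 'Checking if target configuration is the same' step below diffs a fresh save_config dump (fed in over /dev/fd/62) against the snapshot, after canonicalizing both through config_filter.py -method sort. Roughly, under the assumption that config_filter.py filters stdin to stdout:

    tmp1=$(mktemp /tmp/62.XXX)
    tmp2=$(mktemp /tmp/spdk_tgt_config.json.XXX)
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > "$tmp1"
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > "$tmp2"
    diff -u "$tmp1" "$tmp2" && echo 'INFO: JSON config files are the same'

Sorting first makes the comparison insensitive to RPC ordering, so only real configuration drift produces a diff.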
00:05:39.120 14:04:06 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:39.120 14:04:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:39.377 14:04:06 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:39.377 14:04:06 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:39.377 14:04:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:39.377 + '[' 2 -ne 2 ']' 00:05:39.377 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:39.377 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:05:39.377 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:39.377 +++ basename /dev/fd/62 00:05:39.377 ++ mktemp /tmp/62.XXX 00:05:39.377 + tmp_file_1=/tmp/62.wO7 00:05:39.377 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:39.377 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:39.377 + tmp_file_2=/tmp/spdk_tgt_config.json.ztZ 00:05:39.377 + ret=0 00:05:39.377 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:39.943 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:39.943 + diff -u /tmp/62.wO7 /tmp/spdk_tgt_config.json.ztZ 00:05:39.943 + ret=1 00:05:39.943 + echo '=== Start of file: /tmp/62.wO7 ===' 00:05:39.943 + cat /tmp/62.wO7 00:05:39.943 + echo '=== End of file: /tmp/62.wO7 ===' 00:05:39.943 + echo '' 00:05:39.943 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ztZ ===' 00:05:39.943 + cat /tmp/spdk_tgt_config.json.ztZ 00:05:39.943 + echo '=== End of file: /tmp/spdk_tgt_config.json.ztZ ===' 00:05:39.943 + echo '' 00:05:39.943 + rm /tmp/62.wO7 /tmp/spdk_tgt_config.json.ztZ 00:05:39.943 + exit 1 00:05:39.943 14:04:07 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:39.943 INFO: configuration change detected. 
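Change detection is the inverse check: the sentinel bdev created earlier for exactly this purpose is deleted, and the same sorted diff is re-run and must now fail:

    $rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
    # re-running the diff from the previous step now exits 1; the harness treats
    # that as success, dumps both files, and prints the 'configuration change
    # detected' banner seen below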
00:05:39.943 14:04:07 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:39.943 14:04:07 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:39.943 14:04:07 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:39.943 14:04:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.943 14:04:07 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:39.943 14:04:07 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:39.943 14:04:07 json_config -- json_config/json_config.sh@317 -- # [[ -n 4165764 ]] 00:05:39.943 14:04:07 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:39.943 14:04:07 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:39.943 14:04:07 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:39.943 14:04:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.943 14:04:07 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:39.943 14:04:07 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:39.943 14:04:07 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:39.943 14:04:07 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:39.943 14:04:07 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:39.943 14:04:07 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:39.943 14:04:07 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:39.943 14:04:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.943 14:04:07 json_config -- json_config/json_config.sh@323 -- # killprocess 4165764 00:05:39.943 14:04:07 json_config -- common/autotest_common.sh@946 -- # '[' -z 4165764 ']' 00:05:39.943 14:04:07 json_config -- common/autotest_common.sh@950 -- # kill -0 4165764 00:05:39.943 14:04:07 json_config -- common/autotest_common.sh@951 -- # uname 00:05:39.943 14:04:07 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:39.943 14:04:07 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4165764 00:05:39.943 14:04:07 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:39.943 14:04:07 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:39.943 14:04:07 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4165764' 00:05:39.943 killing process with pid 4165764 00:05:39.943 14:04:07 json_config -- common/autotest_common.sh@965 -- # kill 4165764 00:05:39.943 14:04:07 json_config -- common/autotest_common.sh@970 -- # wait 4165764 00:05:41.839 14:04:08 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.839 14:04:08 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:41.839 14:04:08 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:41.839 14:04:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.839 14:04:08 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:41.839 14:04:08 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:41.839 INFO: Success 00:05:41.839 14:04:08 json_config -- 
json_config/json_config.sh@1 -- # nvmftestfini 00:05:41.839 14:04:08 json_config -- nvmf/common.sh@488 -- # nvmfcleanup 00:05:41.839 14:04:08 json_config -- nvmf/common.sh@117 -- # sync 00:05:41.839 14:04:08 json_config -- nvmf/common.sh@119 -- # '[' '' == tcp ']' 00:05:41.839 14:04:08 json_config -- nvmf/common.sh@119 -- # '[' '' == rdma ']' 00:05:41.840 14:04:08 json_config -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:05:41.840 14:04:08 json_config -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:05:41.840 14:04:08 json_config -- nvmf/common.sh@495 -- # [[ '' == \t\c\p ]] 00:05:41.840 00:05:41.840 real 0m19.167s 00:05:41.840 user 0m21.880s 00:05:41.840 sys 0m3.749s 00:05:41.840 14:04:08 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:41.840 14:04:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.840 ************************************ 00:05:41.840 END TEST json_config 00:05:41.840 ************************************ 00:05:41.840 14:04:08 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:41.840 14:04:08 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:41.840 14:04:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.840 14:04:08 -- common/autotest_common.sh@10 -- # set +x 00:05:41.840 ************************************ 00:05:41.840 START TEST json_config_extra_key 00:05:41.840 ************************************ 00:05:41.840 14:04:08 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:41.840 14:04:08 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:41.840 14:04:08 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:41.840 14:04:08 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:41.840 14:04:08 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:41.840 14:04:08 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:41.840 14:04:08 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:41.840 14:04:08 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:41.840 14:04:08 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:41.840 14:04:08 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:41.840 14:04:08 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:41.840 14:04:08 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:41.840 14:04:08 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:41.840 14:04:08 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:05:41.840 14:04:08 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:05:41.840 14:04:08 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:41.840 14:04:08 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:41.840 14:04:08 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:41.840 14:04:08 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
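json_config_extra_key starts by sourcing test/nvmf/common.sh, which is where the constants used throughout this log come from. The relevant defaults, as they appear in the trace above:

    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVMF_IP_PREFIX=192.168.100
    NVMF_IP_LEAST_ADDR=8                 # => first target IP 192.168.100.8
    NVMF_SERIAL=SPDKISFASTANDAWESOME
    NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:... in this run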
00:05:41.840 14:04:08 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:41.840 14:04:08 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.840 14:04:08 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.840 14:04:08 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.840 14:04:08 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.840 14:04:08 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.840 14:04:08 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.840 14:04:08 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:41.840 14:04:08 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.840 14:04:08 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:41.840 14:04:08 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:41.840 14:04:08 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:41.840 14:04:08 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:41.840 14:04:08 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:41.840 14:04:08 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:41.840 14:04:08 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:41.840 14:04:08 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:41.840 14:04:08 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:41.840 14:04:08 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:41.840 14:04:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:41.840 14:04:08 
json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:41.840 14:04:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:41.840 14:04:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:41.840 14:04:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:41.840 14:04:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:41.840 14:04:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:41.840 14:04:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:41.840 14:04:08 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:41.840 14:04:08 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:41.840 INFO: launching applications... 00:05:41.840 14:04:08 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:41.840 14:04:08 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:41.840 14:04:08 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:41.840 14:04:08 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:41.840 14:04:08 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:41.840 14:04:08 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:41.840 14:04:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.840 14:04:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.840 14:04:08 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=4166745 00:05:41.840 14:04:08 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:41.840 14:04:08 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:41.840 Waiting for target to run... 00:05:41.840 14:04:08 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 4166745 /var/tmp/spdk_tgt.sock 00:05:41.840 14:04:08 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 4166745 ']' 00:05:41.840 14:04:08 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:41.840 14:04:08 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:41.840 14:04:08 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:41.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
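waitforlisten (common/autotest_common.sh) is the readiness gate used for every target start in this log: it takes the pid and the RPC socket path and retries up to max_retries=100, the value visible in the trace. A simplified, hedged reconstruction of the idea only; the real helper's probe details differ:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock} max_retries=100
        while ((max_retries-- > 0)); do
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1 && return 0
            kill -0 "$pid" 2>/dev/null || return 1   # stop waiting if the target died
            sleep 0.1
        done
        return 1
    }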
00:05:41.840 14:04:08 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:41.840 14:04:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:41.840 [2024-07-24 14:04:09.037904] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:41.840 [2024-07-24 14:04:09.037983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4166745 ] 00:05:41.840 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.407 [2024-07-24 14:04:09.531308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.407 [2024-07-24 14:04:09.610194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.665 14:04:10 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:42.665 14:04:10 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:05:42.665 14:04:10 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:42.665 00:05:42.665 14:04:10 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:42.665 INFO: shutting down applications... 00:05:42.665 14:04:10 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:42.665 14:04:10 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:42.665 14:04:10 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:42.665 14:04:10 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 4166745 ]] 00:05:42.665 14:04:10 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 4166745 00:05:42.665 14:04:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:42.665 14:04:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:42.665 14:04:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4166745 00:05:42.665 14:04:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:43.230 14:04:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:43.230 14:04:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:43.230 14:04:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4166745 00:05:43.230 14:04:10 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:43.230 14:04:10 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:43.230 14:04:10 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:43.230 14:04:10 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:43.230 SPDK target shutdown done 00:05:43.230 14:04:10 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:43.230 Success 00:05:43.230 00:05:43.230 real 0m1.592s 00:05:43.230 user 0m1.421s 00:05:43.230 sys 0m0.605s 00:05:43.230 14:04:10 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:43.230 14:04:10 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:43.230 ************************************ 00:05:43.230 END TEST json_config_extra_key 00:05:43.230 ************************************ 00:05:43.230 14:04:10 -- spdk/autotest.sh@174 -- # run_test alias_rpc 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:43.230 14:04:10 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:43.230 14:04:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:43.230 14:04:10 -- common/autotest_common.sh@10 -- # set +x 00:05:43.230 ************************************ 00:05:43.230 START TEST alias_rpc 00:05:43.230 ************************************ 00:05:43.230 14:04:10 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:43.488 * Looking for test storage... 00:05:43.488 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:05:43.488 14:04:10 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:43.488 14:04:10 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=4167001 00:05:43.488 14:04:10 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.488 14:04:10 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 4167001 00:05:43.488 14:04:10 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 4167001 ']' 00:05:43.488 14:04:10 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.488 14:04:10 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:43.488 14:04:10 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.488 14:04:10 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:43.488 14:04:10 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.488 [2024-07-24 14:04:10.673127] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:05:43.488 [2024-07-24 14:04:10.673215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4167001 ] 00:05:43.488 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.488 [2024-07-24 14:04:10.741239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.488 [2024-07-24 14:04:10.821846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.746 14:04:11 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:43.746 14:04:11 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:43.746 14:04:11 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:44.003 14:04:11 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 4167001 00:05:44.003 14:04:11 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 4167001 ']' 00:05:44.003 14:04:11 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 4167001 00:05:44.004 14:04:11 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:05:44.004 14:04:11 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:44.004 14:04:11 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4167001 00:05:44.004 14:04:11 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:44.004 14:04:11 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:44.004 14:04:11 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4167001' 00:05:44.004 killing process with pid 4167001 00:05:44.004 14:04:11 alias_rpc -- common/autotest_common.sh@965 -- # kill 4167001 00:05:44.004 14:04:11 alias_rpc -- common/autotest_common.sh@970 -- # wait 4167001 00:05:44.569 00:05:44.569 real 0m1.198s 00:05:44.569 user 0m1.229s 00:05:44.569 sys 0m0.457s 00:05:44.569 14:04:11 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:44.569 14:04:11 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.569 ************************************ 00:05:44.569 END TEST alias_rpc 00:05:44.569 ************************************ 00:05:44.569 14:04:11 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:44.569 14:04:11 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:44.569 14:04:11 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:44.569 14:04:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:44.569 14:04:11 -- common/autotest_common.sh@10 -- # set +x 00:05:44.569 ************************************ 00:05:44.569 START TEST spdkcli_tcp 00:05:44.569 ************************************ 00:05:44.569 14:04:11 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:44.569 * Looking for test storage... 
00:05:44.569 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:05:44.569 14:04:11 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:05:44.569 14:04:11 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:44.569 14:04:11 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:05:44.569 14:04:11 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:44.569 14:04:11 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:44.569 14:04:11 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:44.569 14:04:11 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:44.569 14:04:11 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:44.569 14:04:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:44.569 14:04:11 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=4167240 00:05:44.569 14:04:11 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:44.569 14:04:11 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 4167240 00:05:44.569 14:04:11 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 4167240 ']' 00:05:44.569 14:04:11 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.569 14:04:11 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:44.569 14:04:11 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.569 14:04:11 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:44.569 14:04:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:44.569 [2024-07-24 14:04:11.922507] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:05:44.569 [2024-07-24 14:04:11.922597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4167240 ] 00:05:44.827 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.827 [2024-07-24 14:04:11.992920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.827 [2024-07-24 14:04:12.085817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.827 [2024-07-24 14:04:12.085841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.084 14:04:12 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:45.085 14:04:12 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:05:45.085 14:04:12 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=4167246 00:05:45.085 14:04:12 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:45.085 14:04:12 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:45.343 [ 00:05:45.343 "bdev_malloc_delete", 00:05:45.343 "bdev_malloc_create", 00:05:45.343 "bdev_null_resize", 00:05:45.343 "bdev_null_delete", 00:05:45.343 "bdev_null_create", 00:05:45.343 "bdev_nvme_cuse_unregister", 00:05:45.343 "bdev_nvme_cuse_register", 00:05:45.343 "bdev_opal_new_user", 00:05:45.343 "bdev_opal_set_lock_state", 00:05:45.343 "bdev_opal_delete", 00:05:45.343 "bdev_opal_get_info", 00:05:45.343 "bdev_opal_create", 00:05:45.343 "bdev_nvme_opal_revert", 00:05:45.343 "bdev_nvme_opal_init", 00:05:45.343 "bdev_nvme_send_cmd", 00:05:45.343 "bdev_nvme_get_path_iostat", 00:05:45.343 "bdev_nvme_get_mdns_discovery_info", 00:05:45.343 "bdev_nvme_stop_mdns_discovery", 00:05:45.343 "bdev_nvme_start_mdns_discovery", 00:05:45.343 "bdev_nvme_set_multipath_policy", 00:05:45.343 "bdev_nvme_set_preferred_path", 00:05:45.343 "bdev_nvme_get_io_paths", 00:05:45.343 "bdev_nvme_remove_error_injection", 00:05:45.343 "bdev_nvme_add_error_injection", 00:05:45.343 "bdev_nvme_get_discovery_info", 00:05:45.343 "bdev_nvme_stop_discovery", 00:05:45.343 "bdev_nvme_start_discovery", 00:05:45.343 "bdev_nvme_get_controller_health_info", 00:05:45.343 "bdev_nvme_disable_controller", 00:05:45.343 "bdev_nvme_enable_controller", 00:05:45.343 "bdev_nvme_reset_controller", 00:05:45.343 "bdev_nvme_get_transport_statistics", 00:05:45.343 "bdev_nvme_apply_firmware", 00:05:45.343 "bdev_nvme_detach_controller", 00:05:45.343 "bdev_nvme_get_controllers", 00:05:45.343 "bdev_nvme_attach_controller", 00:05:45.343 "bdev_nvme_set_hotplug", 00:05:45.343 "bdev_nvme_set_options", 00:05:45.343 "bdev_passthru_delete", 00:05:45.343 "bdev_passthru_create", 00:05:45.343 "bdev_lvol_set_parent_bdev", 00:05:45.343 "bdev_lvol_set_parent", 00:05:45.343 "bdev_lvol_check_shallow_copy", 00:05:45.343 "bdev_lvol_start_shallow_copy", 00:05:45.343 "bdev_lvol_grow_lvstore", 00:05:45.343 "bdev_lvol_get_lvols", 00:05:45.343 "bdev_lvol_get_lvstores", 00:05:45.343 "bdev_lvol_delete", 00:05:45.343 "bdev_lvol_set_read_only", 00:05:45.343 "bdev_lvol_resize", 00:05:45.343 "bdev_lvol_decouple_parent", 00:05:45.343 "bdev_lvol_inflate", 00:05:45.343 "bdev_lvol_rename", 00:05:45.343 "bdev_lvol_clone_bdev", 00:05:45.343 "bdev_lvol_clone", 00:05:45.343 "bdev_lvol_snapshot", 00:05:45.343 "bdev_lvol_create", 00:05:45.343 "bdev_lvol_delete_lvstore", 00:05:45.343 "bdev_lvol_rename_lvstore", 
00:05:45.343 "bdev_lvol_create_lvstore", 00:05:45.343 "bdev_raid_set_options", 00:05:45.343 "bdev_raid_remove_base_bdev", 00:05:45.343 "bdev_raid_add_base_bdev", 00:05:45.343 "bdev_raid_delete", 00:05:45.343 "bdev_raid_create", 00:05:45.343 "bdev_raid_get_bdevs", 00:05:45.343 "bdev_error_inject_error", 00:05:45.343 "bdev_error_delete", 00:05:45.343 "bdev_error_create", 00:05:45.343 "bdev_split_delete", 00:05:45.343 "bdev_split_create", 00:05:45.343 "bdev_delay_delete", 00:05:45.343 "bdev_delay_create", 00:05:45.343 "bdev_delay_update_latency", 00:05:45.343 "bdev_zone_block_delete", 00:05:45.343 "bdev_zone_block_create", 00:05:45.343 "blobfs_create", 00:05:45.343 "blobfs_detect", 00:05:45.343 "blobfs_set_cache_size", 00:05:45.343 "bdev_aio_delete", 00:05:45.343 "bdev_aio_rescan", 00:05:45.343 "bdev_aio_create", 00:05:45.343 "bdev_ftl_set_property", 00:05:45.343 "bdev_ftl_get_properties", 00:05:45.343 "bdev_ftl_get_stats", 00:05:45.343 "bdev_ftl_unmap", 00:05:45.343 "bdev_ftl_unload", 00:05:45.343 "bdev_ftl_delete", 00:05:45.343 "bdev_ftl_load", 00:05:45.343 "bdev_ftl_create", 00:05:45.343 "bdev_virtio_attach_controller", 00:05:45.343 "bdev_virtio_scsi_get_devices", 00:05:45.343 "bdev_virtio_detach_controller", 00:05:45.343 "bdev_virtio_blk_set_hotplug", 00:05:45.343 "bdev_iscsi_delete", 00:05:45.343 "bdev_iscsi_create", 00:05:45.343 "bdev_iscsi_set_options", 00:05:45.343 "accel_error_inject_error", 00:05:45.343 "ioat_scan_accel_module", 00:05:45.343 "dsa_scan_accel_module", 00:05:45.343 "iaa_scan_accel_module", 00:05:45.343 "keyring_file_remove_key", 00:05:45.343 "keyring_file_add_key", 00:05:45.343 "keyring_linux_set_options", 00:05:45.343 "iscsi_get_histogram", 00:05:45.343 "iscsi_enable_histogram", 00:05:45.343 "iscsi_set_options", 00:05:45.343 "iscsi_get_auth_groups", 00:05:45.343 "iscsi_auth_group_remove_secret", 00:05:45.343 "iscsi_auth_group_add_secret", 00:05:45.343 "iscsi_delete_auth_group", 00:05:45.343 "iscsi_create_auth_group", 00:05:45.343 "iscsi_set_discovery_auth", 00:05:45.343 "iscsi_get_options", 00:05:45.343 "iscsi_target_node_request_logout", 00:05:45.343 "iscsi_target_node_set_redirect", 00:05:45.343 "iscsi_target_node_set_auth", 00:05:45.343 "iscsi_target_node_add_lun", 00:05:45.343 "iscsi_get_stats", 00:05:45.343 "iscsi_get_connections", 00:05:45.343 "iscsi_portal_group_set_auth", 00:05:45.343 "iscsi_start_portal_group", 00:05:45.343 "iscsi_delete_portal_group", 00:05:45.343 "iscsi_create_portal_group", 00:05:45.343 "iscsi_get_portal_groups", 00:05:45.343 "iscsi_delete_target_node", 00:05:45.343 "iscsi_target_node_remove_pg_ig_maps", 00:05:45.343 "iscsi_target_node_add_pg_ig_maps", 00:05:45.343 "iscsi_create_target_node", 00:05:45.343 "iscsi_get_target_nodes", 00:05:45.343 "iscsi_delete_initiator_group", 00:05:45.343 "iscsi_initiator_group_remove_initiators", 00:05:45.343 "iscsi_initiator_group_add_initiators", 00:05:45.343 "iscsi_create_initiator_group", 00:05:45.343 "iscsi_get_initiator_groups", 00:05:45.343 "nvmf_set_crdt", 00:05:45.343 "nvmf_set_config", 00:05:45.343 "nvmf_set_max_subsystems", 00:05:45.343 "nvmf_stop_mdns_prr", 00:05:45.343 "nvmf_publish_mdns_prr", 00:05:45.343 "nvmf_subsystem_get_listeners", 00:05:45.343 "nvmf_subsystem_get_qpairs", 00:05:45.343 "nvmf_subsystem_get_controllers", 00:05:45.343 "nvmf_get_stats", 00:05:45.343 "nvmf_get_transports", 00:05:45.343 "nvmf_create_transport", 00:05:45.343 "nvmf_get_targets", 00:05:45.343 "nvmf_delete_target", 00:05:45.343 "nvmf_create_target", 00:05:45.343 "nvmf_subsystem_allow_any_host", 00:05:45.343 
"nvmf_subsystem_remove_host", 00:05:45.343 "nvmf_subsystem_add_host", 00:05:45.343 "nvmf_ns_remove_host", 00:05:45.343 "nvmf_ns_add_host", 00:05:45.343 "nvmf_subsystem_remove_ns", 00:05:45.343 "nvmf_subsystem_add_ns", 00:05:45.343 "nvmf_subsystem_listener_set_ana_state", 00:05:45.343 "nvmf_discovery_get_referrals", 00:05:45.343 "nvmf_discovery_remove_referral", 00:05:45.343 "nvmf_discovery_add_referral", 00:05:45.343 "nvmf_subsystem_remove_listener", 00:05:45.343 "nvmf_subsystem_add_listener", 00:05:45.343 "nvmf_delete_subsystem", 00:05:45.343 "nvmf_create_subsystem", 00:05:45.343 "nvmf_get_subsystems", 00:05:45.343 "env_dpdk_get_mem_stats", 00:05:45.343 "nbd_get_disks", 00:05:45.343 "nbd_stop_disk", 00:05:45.343 "nbd_start_disk", 00:05:45.343 "ublk_recover_disk", 00:05:45.343 "ublk_get_disks", 00:05:45.343 "ublk_stop_disk", 00:05:45.343 "ublk_start_disk", 00:05:45.343 "ublk_destroy_target", 00:05:45.343 "ublk_create_target", 00:05:45.343 "virtio_blk_create_transport", 00:05:45.343 "virtio_blk_get_transports", 00:05:45.344 "vhost_controller_set_coalescing", 00:05:45.344 "vhost_get_controllers", 00:05:45.344 "vhost_delete_controller", 00:05:45.344 "vhost_create_blk_controller", 00:05:45.344 "vhost_scsi_controller_remove_target", 00:05:45.344 "vhost_scsi_controller_add_target", 00:05:45.344 "vhost_start_scsi_controller", 00:05:45.344 "vhost_create_scsi_controller", 00:05:45.344 "thread_set_cpumask", 00:05:45.344 "framework_get_scheduler", 00:05:45.344 "framework_set_scheduler", 00:05:45.344 "framework_get_reactors", 00:05:45.344 "thread_get_io_channels", 00:05:45.344 "thread_get_pollers", 00:05:45.344 "thread_get_stats", 00:05:45.344 "framework_monitor_context_switch", 00:05:45.344 "spdk_kill_instance", 00:05:45.344 "log_enable_timestamps", 00:05:45.344 "log_get_flags", 00:05:45.344 "log_clear_flag", 00:05:45.344 "log_set_flag", 00:05:45.344 "log_get_level", 00:05:45.344 "log_set_level", 00:05:45.344 "log_get_print_level", 00:05:45.344 "log_set_print_level", 00:05:45.344 "framework_enable_cpumask_locks", 00:05:45.344 "framework_disable_cpumask_locks", 00:05:45.344 "framework_wait_init", 00:05:45.344 "framework_start_init", 00:05:45.344 "scsi_get_devices", 00:05:45.344 "bdev_get_histogram", 00:05:45.344 "bdev_enable_histogram", 00:05:45.344 "bdev_set_qos_limit", 00:05:45.344 "bdev_set_qd_sampling_period", 00:05:45.344 "bdev_get_bdevs", 00:05:45.344 "bdev_reset_iostat", 00:05:45.344 "bdev_get_iostat", 00:05:45.344 "bdev_examine", 00:05:45.344 "bdev_wait_for_examine", 00:05:45.344 "bdev_set_options", 00:05:45.344 "notify_get_notifications", 00:05:45.344 "notify_get_types", 00:05:45.344 "accel_get_stats", 00:05:45.344 "accel_set_options", 00:05:45.344 "accel_set_driver", 00:05:45.344 "accel_crypto_key_destroy", 00:05:45.344 "accel_crypto_keys_get", 00:05:45.344 "accel_crypto_key_create", 00:05:45.344 "accel_assign_opc", 00:05:45.344 "accel_get_module_info", 00:05:45.344 "accel_get_opc_assignments", 00:05:45.344 "vmd_rescan", 00:05:45.344 "vmd_remove_device", 00:05:45.344 "vmd_enable", 00:05:45.344 "sock_get_default_impl", 00:05:45.344 "sock_set_default_impl", 00:05:45.344 "sock_impl_set_options", 00:05:45.344 "sock_impl_get_options", 00:05:45.344 "iobuf_get_stats", 00:05:45.344 "iobuf_set_options", 00:05:45.344 "framework_get_pci_devices", 00:05:45.344 "framework_get_config", 00:05:45.344 "framework_get_subsystems", 00:05:45.344 "trace_get_info", 00:05:45.344 "trace_get_tpoint_group_mask", 00:05:45.344 "trace_disable_tpoint_group", 00:05:45.344 "trace_enable_tpoint_group", 00:05:45.344 
"trace_clear_tpoint_mask", 00:05:45.344 "trace_set_tpoint_mask", 00:05:45.344 "keyring_get_keys", 00:05:45.344 "spdk_get_version", 00:05:45.344 "rpc_get_methods" 00:05:45.344 ] 00:05:45.344 14:04:12 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:45.344 14:04:12 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:45.344 14:04:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:45.344 14:04:12 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:45.344 14:04:12 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 4167240 00:05:45.344 14:04:12 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 4167240 ']' 00:05:45.344 14:04:12 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 4167240 00:05:45.344 14:04:12 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:05:45.344 14:04:12 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:45.344 14:04:12 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4167240 00:05:45.344 14:04:12 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:45.344 14:04:12 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:45.344 14:04:12 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4167240' 00:05:45.344 killing process with pid 4167240 00:05:45.344 14:04:12 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 4167240 00:05:45.344 14:04:12 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 4167240 00:05:45.910 00:05:45.910 real 0m1.203s 00:05:45.910 user 0m2.128s 00:05:45.910 sys 0m0.428s 00:05:45.910 14:04:13 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:45.910 14:04:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:45.910 ************************************ 00:05:45.910 END TEST spdkcli_tcp 00:05:45.910 ************************************ 00:05:45.910 14:04:13 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:45.910 14:04:13 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:45.910 14:04:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.910 14:04:13 -- common/autotest_common.sh@10 -- # set +x 00:05:45.910 ************************************ 00:05:45.910 START TEST dpdk_mem_utility 00:05:45.910 ************************************ 00:05:45.910 14:04:13 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:45.910 * Looking for test storage... 
00:05:45.910 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:05:45.910 14:04:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:45.910 14:04:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=4167440 00:05:45.910 14:04:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:45.910 14:04:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 4167440 00:05:45.910 14:04:13 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 4167440 ']' 00:05:45.910 14:04:13 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.910 14:04:13 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:45.910 14:04:13 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.910 14:04:13 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:45.910 14:04:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:45.910 [2024-07-24 14:04:13.168000] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:45.910 [2024-07-24 14:04:13.168079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4167440 ] 00:05:45.910 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.910 [2024-07-24 14:04:13.232603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.167 [2024-07-24 14:04:13.314810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.425 14:04:13 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:46.425 14:04:13 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:05:46.425 14:04:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:46.425 14:04:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:46.425 14:04:13 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.425 14:04:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:46.425 { 00:05:46.425 "filename": "/tmp/spdk_mem_dump.txt" 00:05:46.425 } 00:05:46.425 14:04:13 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.425 14:04:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:46.425 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:46.425 1 heaps totaling size 814.000000 MiB 00:05:46.425 size: 814.000000 MiB heap id: 0 00:05:46.425 end heaps---------- 00:05:46.425 8 mempools totaling size 598.116089 MiB 00:05:46.425 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:46.425 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:46.425 size: 84.521057 MiB name: bdev_io_4167440 00:05:46.425 size: 51.011292 MiB name: evtpool_4167440 00:05:46.425 size: 50.003479 MiB name: msgpool_4167440 
00:05:46.425 size: 21.763794 MiB name: PDU_Pool 00:05:46.425 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:46.425 size: 0.026123 MiB name: Session_Pool 00:05:46.425 end mempools------- 00:05:46.425 6 memzones totaling size 4.142822 MiB 00:05:46.425 size: 1.000366 MiB name: RG_ring_0_4167440 00:05:46.425 size: 1.000366 MiB name: RG_ring_1_4167440 00:05:46.425 size: 1.000366 MiB name: RG_ring_4_4167440 00:05:46.425 size: 1.000366 MiB name: RG_ring_5_4167440 00:05:46.425 size: 0.125366 MiB name: RG_ring_2_4167440 00:05:46.425 size: 0.015991 MiB name: RG_ring_3_4167440 00:05:46.425 end memzones------- 00:05:46.425 14:04:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:46.425 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:46.425 list of free elements. size: 12.519348 MiB 00:05:46.425 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:46.425 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:46.425 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:46.425 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:46.425 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:46.425 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:46.425 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:46.425 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:46.425 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:46.425 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:46.425 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:46.425 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:46.425 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:46.425 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:46.425 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:46.425 list of standard malloc elements. 
size: 199.218079 MiB 00:05:46.425 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:46.425 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:46.425 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:46.425 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:46.425 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:46.425 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:46.425 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:46.425 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:46.425 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:46.425 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:46.425 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:46.425 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:46.425 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:46.425 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:46.425 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:46.425 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:46.425 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:46.425 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:46.425 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:46.425 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:46.425 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:46.426 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:46.426 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:46.426 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:46.426 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:46.426 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:46.426 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:46.426 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:46.426 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:46.426 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:46.426 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:46.426 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:46.426 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:46.426 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:46.426 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:46.426 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:46.426 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:46.426 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:46.426 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:46.426 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:46.426 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:46.426 list of memzone associated elements. 
size: 602.262573 MiB 00:05:46.426 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:46.426 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:46.426 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:46.426 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:46.426 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:46.426 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_4167440_0 00:05:46.426 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:46.426 associated memzone info: size: 48.002930 MiB name: MP_evtpool_4167440_0 00:05:46.426 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:46.426 associated memzone info: size: 48.002930 MiB name: MP_msgpool_4167440_0 00:05:46.426 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:46.426 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:46.426 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:46.426 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:46.426 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:46.426 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_4167440 00:05:46.426 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:46.426 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_4167440 00:05:46.426 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:46.426 associated memzone info: size: 1.007996 MiB name: MP_evtpool_4167440 00:05:46.426 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:46.426 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:46.426 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:46.426 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:46.426 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:46.426 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:46.426 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:46.426 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:46.426 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:46.426 associated memzone info: size: 1.000366 MiB name: RG_ring_0_4167440 00:05:46.426 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:46.426 associated memzone info: size: 1.000366 MiB name: RG_ring_1_4167440 00:05:46.426 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:46.426 associated memzone info: size: 1.000366 MiB name: RG_ring_4_4167440 00:05:46.426 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:46.426 associated memzone info: size: 1.000366 MiB name: RG_ring_5_4167440 00:05:46.426 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:46.426 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_4167440 00:05:46.426 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:46.426 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:46.426 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:46.426 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:46.426 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:46.426 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:46.426 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:46.426 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_4167440 00:05:46.426 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:46.426 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:46.426 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:46.426 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:46.426 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:46.426 associated memzone info: size: 0.015991 MiB name: RG_ring_3_4167440 00:05:46.426 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:46.426 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:46.426 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:46.426 associated memzone info: size: 0.000183 MiB name: MP_msgpool_4167440 00:05:46.426 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:46.426 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_4167440 00:05:46.426 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:46.426 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:46.426 14:04:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:46.426 14:04:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 4167440 00:05:46.426 14:04:13 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 4167440 ']' 00:05:46.426 14:04:13 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 4167440 00:05:46.426 14:04:13 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:05:46.426 14:04:13 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:46.426 14:04:13 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4167440 00:05:46.426 14:04:13 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:46.426 14:04:13 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:46.426 14:04:13 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4167440' 00:05:46.426 killing process with pid 4167440 00:05:46.426 14:04:13 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 4167440 00:05:46.426 14:04:13 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 4167440 00:05:46.992 00:05:46.992 real 0m1.031s 00:05:46.992 user 0m1.000s 00:05:46.992 sys 0m0.389s 00:05:46.992 14:04:14 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:46.992 14:04:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:46.992 ************************************ 00:05:46.992 END TEST dpdk_mem_utility 00:05:46.992 ************************************ 00:05:46.992 14:04:14 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:46.992 14:04:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:46.992 14:04:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:46.992 14:04:14 -- common/autotest_common.sh@10 -- # set +x 00:05:46.992 ************************************ 00:05:46.992 START TEST event 00:05:46.992 ************************************ 00:05:46.992 14:04:14 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:46.992 * Looking for test storage... 
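The heap/mempool/memzone report above comes from scripts/dpdk_mem_info.py, parsing a dump the target writes on request. A rough sketch of reproducing it by hand against a running SPDK app (reading -m 0 as "per-element detail for heap 0" is an assumption taken from the trace, not a documented flag description):

    # Ask the target to dump DPDK memory state (the trace shows it lands
    # in /tmp/spdk_mem_dump.txt)
    ./scripts/rpc.py env_dpdk_get_mem_stats
    # Summarize heaps, mempools and memzones from the dump
    ./scripts/dpdk_mem_info.py
    # Per-element breakdown, as in the second half of the report above
    ./scripts/dpdk_mem_info.py -m 0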
00:05:46.992 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:05:46.992 14:04:14 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:46.992 14:04:14 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:46.992 14:04:14 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:46.992 14:04:14 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:46.992 14:04:14 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:46.992 14:04:14 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.992 ************************************ 00:05:46.992 START TEST event_perf 00:05:46.992 ************************************ 00:05:46.992 14:04:14 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:46.992 Running I/O for 1 seconds...[2024-07-24 14:04:14.248582] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:46.992 [2024-07-24 14:04:14.248648] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4167628 ] 00:05:46.992 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.992 [2024-07-24 14:04:14.323590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:47.250 [2024-07-24 14:04:14.414384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.250 [2024-07-24 14:04:14.414454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:47.250 [2024-07-24 14:04:14.414546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:47.250 [2024-07-24 14:04:14.414548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.183 Running I/O for 1 seconds... 00:05:48.183 lcore 0: 233613 00:05:48.183 lcore 1: 233612 00:05:48.183 lcore 2: 233612 00:05:48.183 lcore 3: 233614 00:05:48.183 done. 00:05:48.183 00:05:48.183 real 0m1.263s 00:05:48.183 user 0m4.159s 00:05:48.183 sys 0m0.100s 00:05:48.183 14:04:15 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:48.183 14:04:15 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:48.183 ************************************ 00:05:48.183 END TEST event_perf 00:05:48.183 ************************************ 00:05:48.183 14:04:15 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:48.183 14:04:15 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:48.183 14:04:15 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:48.183 14:04:15 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.183 ************************************ 00:05:48.183 START TEST event_reactor 00:05:48.183 ************************************ 00:05:48.183 14:04:15 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:48.441 [2024-07-24 14:04:15.564293] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
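The per-lcore counters in the event_perf output above ("lcore 0: 233613" and so on) read as the number of events each reactor handled during the one-second run. The binary can be run standalone from a built tree; a sketch with the same flags as traced:

    # Four reactors (-m 0xF), one-second run (-t 1);
    # prints one events-handled counter per lcore, then "done."
    ./test/event/event_perf/event_perf -m 0xF -t 1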
00:05:48.441 [2024-07-24 14:04:15.564360] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4167785 ] 00:05:48.441 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.442 [2024-07-24 14:04:15.636737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.442 [2024-07-24 14:04:15.725441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.814 test_start 00:05:49.814 oneshot 00:05:49.814 tick 100 00:05:49.814 tick 100 00:05:49.814 tick 250 00:05:49.814 tick 100 00:05:49.814 tick 100 00:05:49.814 tick 250 00:05:49.814 tick 100 00:05:49.814 tick 500 00:05:49.814 tick 100 00:05:49.814 tick 100 00:05:49.814 tick 250 00:05:49.814 tick 100 00:05:49.814 tick 100 00:05:49.814 test_end 00:05:49.814 00:05:49.814 real 0m1.257s 00:05:49.814 user 0m1.160s 00:05:49.814 sys 0m0.093s 00:05:49.814 14:04:16 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:49.814 14:04:16 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:49.814 ************************************ 00:05:49.814 END TEST event_reactor 00:05:49.814 ************************************ 00:05:49.814 14:04:16 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:49.814 14:04:16 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:49.814 14:04:16 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:49.814 14:04:16 event -- common/autotest_common.sh@10 -- # set +x 00:05:49.814 ************************************ 00:05:49.815 START TEST event_reactor_perf 00:05:49.815 ************************************ 00:05:49.815 14:04:16 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:49.815 [2024-07-24 14:04:16.872409] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
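The "tick 100 / tick 250 / tick 500" lines above look like the reactor test's timed pollers firing at their registered periods, with "oneshot" the single-fire case; that is a reading of the trace, not a documented output format. Both reactor tests take the same minimal invocation:

    # Single-reactor tick trace, one second
    ./test/event/reactor/reactor -t 1
    # Event-throughput variant; prints a single
    # "Performance: N events per second" line, as seen below
    ./test/event/reactor_perf/reactor_perf -t 1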
00:05:49.815 [2024-07-24 14:04:16.872472] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4167943 ] 00:05:49.815 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.815 [2024-07-24 14:04:16.945479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.815 [2024-07-24 14:04:17.035673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.777 test_start 00:05:50.777 test_end 00:05:50.778 Performance: 348201 events per second 00:05:50.778 00:05:50.778 real 0m1.253s 00:05:50.778 user 0m1.153s 00:05:50.778 sys 0m0.096s 00:05:50.778 14:04:18 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:50.778 14:04:18 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:50.778 ************************************ 00:05:50.778 END TEST event_reactor_perf 00:05:50.778 ************************************ 00:05:50.778 14:04:18 event -- event/event.sh@49 -- # uname -s 00:05:50.778 14:04:18 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:50.778 14:04:18 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:50.778 14:04:18 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:50.778 14:04:18 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.778 14:04:18 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.035 ************************************ 00:05:51.035 START TEST event_scheduler 00:05:51.035 ************************************ 00:05:51.035 14:04:18 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:51.035 * Looking for test storage... 00:05:51.035 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:05:51.035 14:04:18 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:51.035 14:04:18 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=4168130 00:05:51.035 14:04:18 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:51.035 14:04:18 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:51.035 14:04:18 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 4168130 00:05:51.035 14:04:18 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 4168130 ']' 00:05:51.035 14:04:18 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.035 14:04:18 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:51.035 14:04:18 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
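scheduler.sh starts the test app parked before subsystem init (--wait-for-rpc), switches the framework to the dynamic scheduler over RPC, then releases initialization; that is exactly what the rpc_cmd calls traced below perform. A condensed sketch, with rpc.py standing in for the script's rpc_cmd helper:

    # Park the app before subsystem init; -p 0x2 picks the main lcore
    ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!
    # Select the dynamic scheduler, then let initialization proceed
    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init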
00:05:51.035 14:04:18 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:51.035 14:04:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:51.035 [2024-07-24 14:04:18.252611] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:51.035 [2024-07-24 14:04:18.252685] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4168130 ] 00:05:51.035 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.035 [2024-07-24 14:04:18.317754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:51.035 [2024-07-24 14:04:18.402495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.035 [2024-07-24 14:04:18.402552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.035 [2024-07-24 14:04:18.402582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:51.035 [2024-07-24 14:04:18.402584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:51.292 14:04:18 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:51.292 14:04:18 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:05:51.292 14:04:18 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:51.292 14:04:18 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.292 14:04:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:51.292 POWER: Env isn't set yet! 00:05:51.292 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:51.292 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:05:51.292 POWER: Cannot get available frequencies of lcore 0 00:05:51.292 POWER: Attempting to initialise PSTAT power management... 
00:05:51.292 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:51.292 POWER: Initialized successfully for lcore 0 power management 00:05:51.292 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:51.292 POWER: Initialized successfully for lcore 1 power management 00:05:51.292 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:51.292 POWER: Initialized successfully for lcore 2 power management 00:05:51.292 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:51.292 POWER: Initialized successfully for lcore 3 power management 00:05:51.292 [2024-07-24 14:04:18.500985] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:51.292 [2024-07-24 14:04:18.501003] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:51.292 [2024-07-24 14:04:18.501013] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:51.292 14:04:18 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.292 14:04:18 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:51.292 14:04:18 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.292 14:04:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:51.292 [2024-07-24 14:04:18.598120] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:51.292 14:04:18 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.292 14:04:18 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:51.292 14:04:18 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:51.292 14:04:18 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:51.292 14:04:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:51.292 ************************************ 00:05:51.292 START TEST scheduler_create_thread 00:05:51.292 ************************************ 00:05:51.292 14:04:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:05:51.292 14:04:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:51.292 14:04:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.292 14:04:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.292 2 00:05:51.292 14:04:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.292 14:04:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:51.292 14:04:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.292 14:04:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.292 3 00:05:51.292 14:04:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.292 14:04:18 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:51.292 14:04:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.292 14:04:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.292 4 00:05:51.292 14:04:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.292 14:04:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:51.292 14:04:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.292 14:04:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.549 5 00:05:51.549 14:04:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.549 14:04:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:51.549 14:04:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.549 14:04:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.549 6 00:05:51.549 14:04:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.549 14:04:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:51.549 14:04:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.549 14:04:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.550 7 00:05:51.550 14:04:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.550 14:04:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:51.550 14:04:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.550 14:04:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.550 8 00:05:51.550 14:04:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.550 14:04:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:51.550 14:04:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.550 14:04:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.550 9 00:05:51.550 14:04:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.550 14:04:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:51.550 14:04:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:05:51.550 14:04:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.550 10 00:05:51.550 14:04:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.550 14:04:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:51.550 14:04:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.550 14:04:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.550 14:04:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.550 14:04:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:51.550 14:04:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:51.550 14:04:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.550 14:04:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.550 14:04:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.550 14:04:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:51.550 14:04:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.550 14:04:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.920 14:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.920 14:04:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:52.920 14:04:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:52.920 14:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.920 14:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.292 14:04:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.292 00:05:54.292 real 0m2.618s 00:05:54.292 user 0m0.012s 00:05:54.292 sys 0m0.003s 00:05:54.292 14:04:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:54.292 14:04:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.292 ************************************ 00:05:54.292 END TEST scheduler_create_thread 00:05:54.292 ************************************ 00:05:54.292 14:04:21 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:54.292 14:04:21 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 4168130 00:05:54.292 14:04:21 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 4168130 ']' 00:05:54.292 14:04:21 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 4168130 00:05:54.292 14:04:21 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 
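scheduler_create_thread drives everything through a per-test rpc.py plugin: the calls traced above create pinned active and idle threads, retune one, and delete another. The same invocations issued by hand, assuming the scheduler_plugin module under test/event/scheduler is importable (e.g. via PYTHONPATH); thread ids 11 and 12 are the ones this particular run returned:

    # -n thread name, -m cpumask, -a active percentage
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    # Drop thread 11 to 50% active, then delete thread 12
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12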
00:05:54.292 14:04:21 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:54.292 14:04:21 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4168130 00:05:54.293 14:04:21 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:05:54.293 14:04:21 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:05:54.293 14:04:21 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4168130' 00:05:54.293 killing process with pid 4168130 00:05:54.293 14:04:21 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 4168130 00:05:54.293 14:04:21 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 4168130 00:05:54.550 [2024-07-24 14:04:21.725504] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:54.550 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:05:54.550 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:54.550 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:05:54.550 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:54.550 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:05:54.550 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:54.550 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:05:54.550 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:54.807 00:05:54.807 real 0m3.782s 00:05:54.807 user 0m5.760s 00:05:54.807 sys 0m0.334s 00:05:54.807 14:04:21 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:54.807 14:04:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.807 ************************************ 00:05:54.807 END TEST event_scheduler 00:05:54.807 ************************************ 00:05:54.807 14:04:21 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:54.807 14:04:21 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:54.807 14:04:21 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:54.807 14:04:21 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:54.807 14:04:21 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.807 ************************************ 00:05:54.807 START TEST app_repeat 00:05:54.807 ************************************ 00:05:54.807 14:04:22 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:05:54.807 14:04:22 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.807 14:04:22 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.807 14:04:22 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:54.807 14:04:22 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.807 14:04:22 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:54.807 14:04:22 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:54.807 14:04:22 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:54.807 14:04:22 event.app_repeat -- event/event.sh@19 -- # repeat_pid=4168703 00:05:54.807 14:04:22 
event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:54.807 14:04:22 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.807 14:04:22 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 4168703' 00:05:54.807 Process app_repeat pid: 4168703 00:05:54.808 14:04:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:54.808 14:04:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:54.808 spdk_app_start Round 0 00:05:54.808 14:04:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4168703 /var/tmp/spdk-nbd.sock 00:05:54.808 14:04:22 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 4168703 ']' 00:05:54.808 14:04:22 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.808 14:04:22 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:54.808 14:04:22 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:54.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:54.808 14:04:22 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:54.808 14:04:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.808 [2024-07-24 14:04:22.025384] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:05:54.808 [2024-07-24 14:04:22.025441] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4168703 ] 00:05:54.808 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.808 [2024-07-24 14:04:22.096221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:55.065 [2024-07-24 14:04:22.184604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.065 [2024-07-24 14:04:22.184609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.065 14:04:22 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:55.065 14:04:22 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:55.065 14:04:22 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.323 Malloc0 00:05:55.323 14:04:22 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.580 Malloc1 00:05:55.580 14:04:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.580 14:04:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.580 14:04:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.580 14:04:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:55.581 14:04:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.581 14:04:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:55.581 14:04:22 event.app_repeat -- 
bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.581 14:04:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.581 14:04:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.581 14:04:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:55.581 14:04:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.581 14:04:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:55.581 14:04:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:55.581 14:04:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:55.581 14:04:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.581 14:04:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:55.838 /dev/nbd0 00:05:55.838 14:04:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:55.838 14:04:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:55.838 14:04:23 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:55.838 14:04:23 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:55.838 14:04:23 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:55.838 14:04:23 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:55.838 14:04:23 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:55.838 14:04:23 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:55.838 14:04:23 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:55.838 14:04:23 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:55.838 14:04:23 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.838 1+0 records in 00:05:55.838 1+0 records out 00:05:55.838 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00017066 s, 24.0 MB/s 00:05:55.838 14:04:23 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:55.838 14:04:23 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:55.838 14:04:23 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:55.838 14:04:23 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:55.838 14:04:23 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:55.838 14:04:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.838 14:04:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.838 14:04:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:56.096 /dev/nbd1 00:05:56.096 14:04:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:56.096 14:04:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:56.096 14:04:23 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:56.096 14:04:23 event.app_repeat -- common/autotest_common.sh@865 
-- # local i 00:05:56.096 14:04:23 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:56.096 14:04:23 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:56.096 14:04:23 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:56.096 14:04:23 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:56.096 14:04:23 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:56.096 14:04:23 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:56.096 14:04:23 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.096 1+0 records in 00:05:56.096 1+0 records out 00:05:56.096 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018507 s, 22.1 MB/s 00:05:56.096 14:04:23 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:56.096 14:04:23 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:56.096 14:04:23 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:56.096 14:04:23 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:56.096 14:04:23 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:56.096 14:04:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.096 14:04:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.096 14:04:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.096 14:04:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.096 14:04:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.353 14:04:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:56.353 { 00:05:56.353 "nbd_device": "/dev/nbd0", 00:05:56.353 "bdev_name": "Malloc0" 00:05:56.353 }, 00:05:56.353 { 00:05:56.353 "nbd_device": "/dev/nbd1", 00:05:56.353 "bdev_name": "Malloc1" 00:05:56.353 } 00:05:56.353 ]' 00:05:56.353 14:04:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:56.353 { 00:05:56.353 "nbd_device": "/dev/nbd0", 00:05:56.353 "bdev_name": "Malloc0" 00:05:56.353 }, 00:05:56.353 { 00:05:56.353 "nbd_device": "/dev/nbd1", 00:05:56.353 "bdev_name": "Malloc1" 00:05:56.353 } 00:05:56.353 ]' 00:05:56.353 14:04:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.353 14:04:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:56.353 /dev/nbd1' 00:05:56.353 14:04:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:56.353 /dev/nbd1' 00:05:56.353 14:04:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.354 14:04:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:56.354 14:04:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:56.354 14:04:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:56.354 14:04:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:56.354 14:04:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:56.354 14:04:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.354 14:04:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.354 14:04:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:56.354 14:04:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.354 14:04:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:56.354 14:04:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:56.354 256+0 records in 00:05:56.354 256+0 records out 00:05:56.354 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00521204 s, 201 MB/s 00:05:56.354 14:04:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.354 14:04:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:56.354 256+0 records in 00:05:56.354 256+0 records out 00:05:56.354 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207082 s, 50.6 MB/s 00:05:56.354 14:04:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.354 14:04:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:56.354 256+0 records in 00:05:56.354 256+0 records out 00:05:56.354 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249249 s, 42.1 MB/s 00:05:56.612 14:04:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:56.612 14:04:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.612 14:04:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.612 14:04:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:56.612 14:04:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.612 14:04:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:56.612 14:04:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:56.612 14:04:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.612 14:04:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:56.612 14:04:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.612 14:04:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:56.612 14:04:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.612 14:04:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:56.612 14:04:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.612 14:04:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.612 14:04:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:56.612 14:04:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:56.612 14:04:23 event.app_repeat -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.612 14:04:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:56.869 14:04:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:56.869 14:04:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:56.869 14:04:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:56.869 14:04:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.869 14:04:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.869 14:04:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:56.869 14:04:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:56.869 14:04:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.869 14:04:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.869 14:04:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:57.127 14:04:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:57.127 14:04:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:57.127 14:04:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:57.127 14:04:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.127 14:04:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.127 14:04:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:57.127 14:04:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:57.127 14:04:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.127 14:04:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.127 14:04:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.127 14:04:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.385 14:04:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:57.385 14:04:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:57.385 14:04:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.385 14:04:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:57.385 14:04:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:57.385 14:04:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.385 14:04:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:57.385 14:04:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:57.385 14:04:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:57.385 14:04:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:57.385 14:04:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:57.385 14:04:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:57.385 14:04:24 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:57.642 14:04:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:57.900 [2024-07-24 14:04:25.060289] app.c: 
909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.900 [2024-07-24 14:04:25.147029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.900 [2024-07-24 14:04:25.147029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.900 [2024-07-24 14:04:25.202675] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:57.900 [2024-07-24 14:04:25.202743] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:01.180 14:04:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:01.180 14:04:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:01.180 spdk_app_start Round 1 00:06:01.180 14:04:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4168703 /var/tmp/spdk-nbd.sock 00:06:01.180 14:04:27 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 4168703 ']' 00:06:01.180 14:04:27 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:01.180 14:04:27 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:01.180 14:04:27 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:01.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:01.180 14:04:27 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:01.180 14:04:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:01.180 14:04:28 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:01.180 14:04:28 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:01.180 14:04:28 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.180 Malloc0 00:06:01.180 14:04:28 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.437 Malloc1 00:06:01.437 14:04:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.437 14:04:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.437 14:04:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.437 14:04:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:01.437 14:04:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.437 14:04:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:01.437 14:04:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.437 14:04:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.437 14:04:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.437 14:04:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:01.437 14:04:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.437 14:04:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:01.437 14:04:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # 
local i 00:06:01.437 14:04:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:01.437 14:04:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.437 14:04:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:01.694 /dev/nbd0 00:06:01.694 14:04:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:01.694 14:04:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:01.694 14:04:28 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:01.694 14:04:28 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:01.694 14:04:28 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:01.694 14:04:28 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:01.694 14:04:28 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:01.694 14:04:28 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:01.694 14:04:28 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:01.694 14:04:28 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:01.694 14:04:28 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.694 1+0 records in 00:06:01.694 1+0 records out 00:06:01.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199864 s, 20.5 MB/s 00:06:01.694 14:04:28 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:01.694 14:04:28 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:01.694 14:04:28 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:01.694 14:04:28 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:01.694 14:04:28 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:01.694 14:04:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.694 14:04:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.694 14:04:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:01.952 /dev/nbd1 00:06:01.952 14:04:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:01.952 14:04:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:01.952 14:04:29 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:01.952 14:04:29 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:01.952 14:04:29 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:01.952 14:04:29 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:01.952 14:04:29 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:01.952 14:04:29 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:01.952 14:04:29 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:01.952 14:04:29 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:01.952 14:04:29 event.app_repeat -- common/autotest_common.sh@881 -- # dd 
if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.952 1+0 records in 00:06:01.952 1+0 records out 00:06:01.952 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228298 s, 17.9 MB/s 00:06:01.952 14:04:29 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:01.952 14:04:29 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:01.952 14:04:29 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:01.952 14:04:29 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:01.952 14:04:29 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:01.952 14:04:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.952 14:04:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.952 14:04:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.952 14:04:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.952 14:04:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:02.210 { 00:06:02.210 "nbd_device": "/dev/nbd0", 00:06:02.210 "bdev_name": "Malloc0" 00:06:02.210 }, 00:06:02.210 { 00:06:02.210 "nbd_device": "/dev/nbd1", 00:06:02.210 "bdev_name": "Malloc1" 00:06:02.210 } 00:06:02.210 ]' 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:02.210 { 00:06:02.210 "nbd_device": "/dev/nbd0", 00:06:02.210 "bdev_name": "Malloc0" 00:06:02.210 }, 00:06:02.210 { 00:06:02.210 "nbd_device": "/dev/nbd1", 00:06:02.210 "bdev_name": "Malloc1" 00:06:02.210 } 00:06:02.210 ]' 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:02.210 /dev/nbd1' 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:02.210 /dev/nbd1' 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:02.210 256+0 records in 
00:06:02.210 256+0 records out 00:06:02.210 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00506957 s, 207 MB/s 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:02.210 256+0 records in 00:06:02.210 256+0 records out 00:06:02.210 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236949 s, 44.3 MB/s 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:02.210 256+0 records in 00:06:02.210 256+0 records out 00:06:02.210 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0231321 s, 45.3 MB/s 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.210 14:04:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:02.467 14:04:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:02.467 14:04:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:02.467 14:04:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:02.467 14:04:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.467 14:04:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
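The dd/cmp sequence above is the whole data-integrity check (nbd_dd_data_verify): write a 1 MiB random pattern file onto each NBD device with O_DIRECT, then read the first 1 MiB of the device back and compare it byte-for-byte against the same file. A minimal standalone sketch of that pattern; the device path and pattern file below are placeholders, not the test's fixtures:

    dev=/dev/nbd0                                        # placeholder: any attached NBD device
    pattern=$(mktemp)
    dd if=/dev/urandom of="$pattern" bs=4096 count=256   # 1 MiB of random data
    dd if="$pattern" of="$dev" bs=4096 count=256 oflag=direct
    cmp -b -n 1M "$pattern" "$dev"                       # non-zero exit on the first differing byte
    rm -f "$pattern"

cmp -b also prints the differing bytes on a mismatch, which is what would surface a dropped or reordered write here.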
00:06:02.467 14:04:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:02.467 14:04:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.467 14:04:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.467 14:04:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.467 14:04:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:02.725 14:04:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:02.725 14:04:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:02.725 14:04:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:02.725 14:04:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.725 14:04:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.725 14:04:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:02.725 14:04:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.725 14:04:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.725 14:04:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.725 14:04:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.725 14:04:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.983 14:04:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:02.983 14:04:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.983 14:04:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:02.983 14:04:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:02.983 14:04:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:02.983 14:04:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.983 14:04:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:02.983 14:04:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:02.983 14:04:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:02.983 14:04:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:02.983 14:04:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:02.983 14:04:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:02.983 14:04:30 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:03.241 14:04:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:03.499 [2024-07-24 14:04:30.810295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.757 [2024-07-24 14:04:30.899698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.757 [2024-07-24 14:04:30.899700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.757 [2024-07-24 14:04:30.960166] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:03.757 [2024-07-24 14:04:30.960260] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
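Each round then tears down through RPC rather than a raw kill: spdk_kill_instance makes the app deliver SIGTERM to itself, app_repeat traps it, stops the current iteration, and reinitializes, which is why the startup notices above reappear with the notify types already registered. A sketch of that step, reusing the rpc.py path and socket from the trace:

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-nbd.sock
    "$RPC" -s "$SOCK" spdk_kill_instance SIGTERM   # app-side signal delivery over the RPC socket
    sleep 3                                        # event.sh waits out the reinitialization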
00:06:06.316 14:04:33 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:06.316 14:04:33 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:06.316 spdk_app_start Round 2 00:06:06.316 14:04:33 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4168703 /var/tmp/spdk-nbd.sock 00:06:06.316 14:04:33 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 4168703 ']' 00:06:06.316 14:04:33 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:06.316 14:04:33 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:06.316 14:04:33 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:06.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:06.316 14:04:33 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:06.316 14:04:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:06.574 14:04:33 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:06.574 14:04:33 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:06.574 14:04:33 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.832 Malloc0 00:06:06.832 14:04:34 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:07.090 Malloc1 00:06:07.090 14:04:34 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:07.090 14:04:34 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.090 14:04:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.090 14:04:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:07.090 14:04:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.090 14:04:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:07.090 14:04:34 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:07.090 14:04:34 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.090 14:04:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.090 14:04:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:07.090 14:04:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.090 14:04:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:07.090 14:04:34 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:07.090 14:04:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:07.090 14:04:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.090 14:04:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:07.348 /dev/nbd0 00:06:07.348 14:04:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:07.348 14:04:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
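Round 2 rebuilds the same fixture as rounds 0 and 1: two 64 MB malloc bdevs with 4096-byte blocks, each exported as a kernel NBD device, with waitfornbd polling /proc/partitions until the node is usable. A condensed sketch, assuming the RPC and SOCK variables from above; the poll interval is an assumption, since the trace never shows it (the first probe succeeds):

    name=$("$RPC" -s "$SOCK" bdev_malloc_create 64 4096)   # prints the new bdev name, e.g. Malloc0
    "$RPC" -s "$SOCK" nbd_start_disk "$name" /dev/nbd0
    for i in $(seq 1 20); do                               # waitfornbd allows up to 20 probes
        grep -q -w nbd0 /proc/partitions && break          # kernel has picked up the device
        sleep 0.1                                          # interval is an assumption
    done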
00:06:07.348 14:04:34 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:07.348 14:04:34 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:07.348 14:04:34 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:07.348 14:04:34 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:07.348 14:04:34 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:07.348 14:04:34 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:07.348 14:04:34 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:07.348 14:04:34 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:07.348 14:04:34 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.348 1+0 records in 00:06:07.348 1+0 records out 00:06:07.348 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000152523 s, 26.9 MB/s 00:06:07.348 14:04:34 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:07.348 14:04:34 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:07.348 14:04:34 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:07.348 14:04:34 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:07.348 14:04:34 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:07.348 14:04:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.348 14:04:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.348 14:04:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:07.606 /dev/nbd1 00:06:07.606 14:04:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:07.606 14:04:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:07.606 14:04:34 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:07.606 14:04:34 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:07.606 14:04:34 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:07.606 14:04:34 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:07.606 14:04:34 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:07.606 14:04:34 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:07.606 14:04:34 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:07.606 14:04:34 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:07.606 14:04:34 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.606 1+0 records in 00:06:07.606 1+0 records out 00:06:07.606 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000167276 s, 24.5 MB/s 00:06:07.606 14:04:34 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:07.606 14:04:34 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:07.606 14:04:34 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:07.606 14:04:34 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:07.606 14:04:34 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:07.606 14:04:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.606 14:04:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.606 14:04:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.606 14:04:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.606 14:04:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.864 14:04:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:07.864 { 00:06:07.864 "nbd_device": "/dev/nbd0", 00:06:07.864 "bdev_name": "Malloc0" 00:06:07.864 }, 00:06:07.864 { 00:06:07.864 "nbd_device": "/dev/nbd1", 00:06:07.864 "bdev_name": "Malloc1" 00:06:07.864 } 00:06:07.864 ]' 00:06:07.864 14:04:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:07.864 { 00:06:07.864 "nbd_device": "/dev/nbd0", 00:06:07.864 "bdev_name": "Malloc0" 00:06:07.864 }, 00:06:07.864 { 00:06:07.864 "nbd_device": "/dev/nbd1", 00:06:07.864 "bdev_name": "Malloc1" 00:06:07.864 } 00:06:07.864 ]' 00:06:07.864 14:04:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:07.864 14:04:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:07.864 /dev/nbd1' 00:06:07.864 14:04:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:07.864 /dev/nbd1' 00:06:07.864 14:04:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.864 14:04:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:07.864 14:04:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:07.864 14:04:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:07.864 14:04:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:07.864 14:04:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:07.864 14:04:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.864 14:04:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.864 14:04:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:07.864 14:04:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:07.864 14:04:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:07.864 14:04:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:07.864 256+0 records in 00:06:07.864 256+0 records out 00:06:07.864 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00482253 s, 217 MB/s 00:06:07.864 14:04:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.864 14:04:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:07.864 256+0 records in 00:06:07.864 256+0 records out 00:06:07.864 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0217102 s, 48.3 MB/s 00:06:07.864 14:04:35 event.app_repeat 
-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.864 14:04:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:08.122 256+0 records in 00:06:08.122 256+0 records out 00:06:08.122 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236776 s, 44.3 MB/s 00:06:08.122 14:04:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:08.122 14:04:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.122 14:04:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.122 14:04:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:08.122 14:04:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:08.122 14:04:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:08.122 14:04:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:08.122 14:04:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.122 14:04:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:08.122 14:04:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.122 14:04:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:08.122 14:04:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:08.122 14:04:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:08.122 14:04:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.122 14:04:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.122 14:04:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:08.122 14:04:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:08.122 14:04:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.122 14:04:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:08.379 14:04:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:08.379 14:04:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:08.379 14:04:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:08.379 14:04:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.379 14:04:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.379 14:04:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:08.379 14:04:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:08.379 14:04:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.379 14:04:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.379 14:04:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:08.637 14:04:35 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:08.637 14:04:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:08.637 14:04:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:08.637 14:04:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.637 14:04:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.637 14:04:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:08.637 14:04:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:08.637 14:04:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.637 14:04:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.637 14:04:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.637 14:04:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.895 14:04:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:08.895 14:04:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:08.895 14:04:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.895 14:04:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:08.895 14:04:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:08.895 14:04:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.895 14:04:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:08.895 14:04:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:08.895 14:04:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:08.895 14:04:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:08.895 14:04:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:08.895 14:04:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:08.895 14:04:36 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:09.153 14:04:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:09.410 [2024-07-24 14:04:36.572748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.411 [2024-07-24 14:04:36.659928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.411 [2024-07-24 14:04:36.659933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.411 [2024-07-24 14:04:36.721523] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:09.411 [2024-07-24 14:04:36.721613] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:12.690 14:04:39 event.app_repeat -- event/event.sh@38 -- # waitforlisten 4168703 /var/tmp/spdk-nbd.sock 00:06:12.690 14:04:39 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 4168703 ']' 00:06:12.690 14:04:39 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:12.690 14:04:39 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:12.690 14:04:39 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
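The zero check above (nbd_get_count) is the detach proof: nbd_get_disks returns a JSON array of {nbd_device, bdev_name} pairs, jq projects out the device paths, and grep -c counts them; after both nbd_stop_disk calls the array is empty, and the bare 'true' in the trace is the '|| true' that absorbs grep's non-zero exit when the count is 0. A sketch, under the same RPC/SOCK assumptions:

    json=$("$RPC" -s "$SOCK" nbd_get_disks)
    count=$(echo "$json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ] && echo 'all NBD devices detached'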
00:06:12.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:12.690 14:04:39 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:12.690 14:04:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:12.691 14:04:39 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:12.691 14:04:39 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:12.691 14:04:39 event.app_repeat -- event/event.sh@39 -- # killprocess 4168703 00:06:12.691 14:04:39 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 4168703 ']' 00:06:12.691 14:04:39 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 4168703 00:06:12.691 14:04:39 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:06:12.691 14:04:39 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:12.691 14:04:39 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4168703 00:06:12.691 14:04:39 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:12.691 14:04:39 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:12.691 14:04:39 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4168703' 00:06:12.691 killing process with pid 4168703 00:06:12.691 14:04:39 event.app_repeat -- common/autotest_common.sh@965 -- # kill 4168703 00:06:12.691 14:04:39 event.app_repeat -- common/autotest_common.sh@970 -- # wait 4168703 00:06:12.691 spdk_app_start is called in Round 0. 00:06:12.691 Shutdown signal received, stop current app iteration 00:06:12.691 Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 reinitialization... 00:06:12.691 spdk_app_start is called in Round 1. 00:06:12.691 Shutdown signal received, stop current app iteration 00:06:12.691 Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 reinitialization... 00:06:12.691 spdk_app_start is called in Round 2. 00:06:12.691 Shutdown signal received, stop current app iteration 00:06:12.691 Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 reinitialization... 00:06:12.691 spdk_app_start is called in Round 3. 
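killprocess above layers three guards before reaping the pid: kill -0 to confirm the process still exists, a uname check gating the Linux-only ps invocation, and a comm-name test so a sudo wrapper is never TERMed as if it were the target. A reduced sketch of the non-sudo path; the real helper in autotest_common.sh handles the sudo case separately:

    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" || return 0                            # nothing left to kill
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                            # wait works because the pid is our child
    }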
00:06:12.691 Shutdown signal received, stop current app iteration 00:06:12.691 14:04:39 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:12.691 14:04:39 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:12.691 00:06:12.691 real 0m17.830s 00:06:12.691 user 0m38.747s 00:06:12.691 sys 0m3.209s 00:06:12.691 14:04:39 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:12.691 14:04:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:12.691 ************************************ 00:06:12.691 END TEST app_repeat 00:06:12.691 ************************************ 00:06:12.691 14:04:39 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:12.691 14:04:39 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:12.691 14:04:39 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:12.691 14:04:39 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.691 14:04:39 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.691 ************************************ 00:06:12.691 START TEST cpu_locks 00:06:12.691 ************************************ 00:06:12.691 14:04:39 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:12.691 * Looking for test storage... 00:06:12.691 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:12.691 14:04:39 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:12.691 14:04:39 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:12.691 14:04:39 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:12.691 14:04:39 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:12.691 14:04:39 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:12.691 14:04:39 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.691 14:04:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.691 ************************************ 00:06:12.691 START TEST default_locks 00:06:12.691 ************************************ 00:06:12.691 14:04:39 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:06:12.691 14:04:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=4171050 00:06:12.691 14:04:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:12.691 14:04:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 4171050 00:06:12.691 14:04:39 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 4171050 ']' 00:06:12.691 14:04:39 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.691 14:04:39 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:12.691 14:04:39 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
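The cpu_locks suite that starts here runs a bare spdk_tgt pinned to core 0 (-m 0x1) and blocks in waitforlisten until the default RPC socket answers; max_retries=100 comes straight from the trace. A sketch of that launch; probing with rpc_get_methods and the retry interval are assumptions here, as the real waitforlisten in autotest_common.sh has its own probe:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$SPDK"/build/bin/spdk_tgt -m 0x1 &
    tgt_pid=$!
    for i in $(seq 1 100); do                                # max_retries=100
        "$SPDK"/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1                                            # interval is an assumption
    done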
00:06:12.691 14:04:39 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:12.691 14:04:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.691 [2024-07-24 14:04:39.988872] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:12.691 [2024-07-24 14:04:39.988945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4171050 ] 00:06:12.691 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.949 [2024-07-24 14:04:40.067465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.949 [2024-07-24 14:04:40.152150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.207 14:04:40 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:13.207 14:04:40 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:06:13.207 14:04:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 4171050 00:06:13.207 14:04:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 4171050 00:06:13.207 14:04:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.465 lslocks: write error 00:06:13.465 14:04:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 4171050 00:06:13.465 14:04:40 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 4171050 ']' 00:06:13.465 14:04:40 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 4171050 00:06:13.465 14:04:40 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:06:13.465 14:04:40 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:13.465 14:04:40 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4171050 00:06:13.465 14:04:40 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:13.465 14:04:40 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:13.465 14:04:40 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4171050' 00:06:13.465 killing process with pid 4171050 00:06:13.465 14:04:40 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 4171050 00:06:13.465 14:04:40 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 4171050 00:06:14.030 14:04:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 4171050 00:06:14.030 14:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:14.030 14:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 4171050 00:06:14.030 14:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:14.030 14:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:14.030 14:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:14.030 14:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:14.030 14:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@651 
-- # waitforlisten 4171050 00:06:14.030 14:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 4171050 ']' 00:06:14.030 14:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.030 14:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:14.030 14:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.030 14:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:14.030 14:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.030 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (4171050) - No such process 00:06:14.030 ERROR: process (pid: 4171050) is no longer running 00:06:14.030 14:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:14.030 14:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:06:14.030 14:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:14.030 14:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:14.030 14:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:14.030 14:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:14.030 14:04:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:14.030 14:04:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:14.030 14:04:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:14.030 14:04:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:14.030 00:06:14.030 real 0m1.217s 00:06:14.030 user 0m1.139s 00:06:14.030 sys 0m0.547s 00:06:14.030 14:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:14.030 14:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.030 ************************************ 00:06:14.030 END TEST default_locks 00:06:14.030 ************************************ 00:06:14.030 14:04:41 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:14.030 14:04:41 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:14.030 14:04:41 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:14.030 14:04:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.030 ************************************ 00:06:14.030 START TEST default_locks_via_rpc 00:06:14.030 ************************************ 00:06:14.030 14:04:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:06:14.030 14:04:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=4171220 00:06:14.030 14:04:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:14.030 14:04:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 4171220 00:06:14.030 14:04:41 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 4171220 ']' 00:06:14.030 14:04:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.030 14:04:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:14.030 14:04:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.030 14:04:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:14.030 14:04:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.030 [2024-07-24 14:04:41.251473] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:14.030 [2024-07-24 14:04:41.251564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4171220 ] 00:06:14.030 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.030 [2024-07-24 14:04:41.316947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.288 [2024-07-24 14:04:41.403888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.288 14:04:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:14.288 14:04:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:14.288 14:04:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:14.288 14:04:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.288 14:04:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.545 14:04:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.545 14:04:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:14.545 14:04:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:14.545 14:04:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:14.545 14:04:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:14.545 14:04:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:14.545 14:04:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.545 14:04:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.545 14:04:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.545 14:04:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 4171220 00:06:14.545 14:04:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 4171220 00:06:14.545 14:04:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:14.803 14:04:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 4171220 00:06:14.803 14:04:42 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 4171220 ']' 00:06:14.803 14:04:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 4171220 00:06:14.803 14:04:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:06:14.803 14:04:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:14.803 14:04:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4171220 00:06:14.803 14:04:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:14.803 14:04:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:14.803 14:04:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4171220' 00:06:14.803 killing process with pid 4171220 00:06:14.803 14:04:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 4171220 00:06:14.803 14:04:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 4171220 00:06:15.061 00:06:15.061 real 0m1.219s 00:06:15.061 user 0m1.128s 00:06:15.061 sys 0m0.558s 00:06:15.061 14:04:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:15.061 14:04:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.061 ************************************ 00:06:15.061 END TEST default_locks_via_rpc 00:06:15.061 ************************************ 00:06:15.319 14:04:42 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:15.320 14:04:42 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:15.320 14:04:42 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:15.320 14:04:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.320 ************************************ 00:06:15.320 START TEST non_locking_app_on_locked_coremask 00:06:15.320 ************************************ 00:06:15.320 14:04:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:06:15.320 14:04:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=4171382 00:06:15.320 14:04:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.320 14:04:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 4171382 /var/tmp/spdk.sock 00:06:15.320 14:04:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 4171382 ']' 00:06:15.320 14:04:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.320 14:04:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:15.320 14:04:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
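The two default_locks variants above reduce to the same observable: a target started with -m 0x1 holds a lock on a /var/tmp/spdk_cpu_lock_* file for each claimed core, which the harness checks with lslocks, and the via_rpc variant drops and re-takes those locks at runtime through the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs seen in the trace. A minimal standalone sketch of that check follows; SPDK_DIR and the sleep-based startup wait are illustrative assumptions, not the harness code.

#!/usr/bin/env bash
# Sketch: observe the per-core lock held by a running SPDK target.
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}        # assumed location of an SPDK build tree

"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 &    # claim core 0
tgt_pid=$!
sleep 2                                    # crude stand-in for the harness's waitforlisten

# While core 0 is claimed, the target holds a lock file such as /var/tmp/spdk_cpu_lock_000.
lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "core lock held by pid $tgt_pid"

# The via_rpc variant toggles the same locks on a live target.
"$SPDK_DIR/scripts/rpc.py" framework_disable_cpumask_locks
"$SPDK_DIR/scripts/rpc.py" framework_enable_cpumask_locks

kill "$tgt_pid"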
00:06:15.320 14:04:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:15.320 14:04:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.320 [2024-07-24 14:04:42.524561] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:15.320 [2024-07-24 14:04:42.524649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4171382 ] 00:06:15.320 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.320 [2024-07-24 14:04:42.596241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.320 [2024-07-24 14:04:42.682156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.577 14:04:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:15.577 14:04:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:15.577 14:04:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=4171385 00:06:15.578 14:04:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:15.578 14:04:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 4171385 /var/tmp/spdk2.sock 00:06:15.578 14:04:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 4171385 ']' 00:06:15.578 14:04:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.578 14:04:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:15.578 14:04:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:15.578 14:04:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:15.578 14:04:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.834 [2024-07-24 14:04:42.988233] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:15.834 [2024-07-24 14:04:42.988321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4171385 ] 00:06:15.834 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.834 [2024-07-24 14:04:43.096807] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
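The "CPU core locks deactivated." notice above comes from the second target of non_locking_app_on_locked_coremask, which is launched with --disable-cpumask-locks and its own RPC socket so it can share core 0 with the already-running, locking target. A sketch of that coexistence, under the same assumed SPDK_DIR as before:

# Sketch: two targets on one core; only the first takes the core lock.
"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 &                        # holds the core-0 lock
pid1=$!
sleep 2
"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 --disable-cpumask-locks \
    -r /var/tmp/spdk2.sock &                                   # skips lock acquisition
pid2=$!
sleep 2
lslocks -p "$pid1" | grep spdk_cpu_lock                        # lock belongs to the first target
lslocks -p "$pid2" | grep -q spdk_cpu_lock || echo "pid $pid2 holds no core lock"
kill "$pid1" "$pid2"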
00:06:15.834 [2024-07-24 14:04:43.096858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.091 [2024-07-24 14:04:43.279136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.658 14:04:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:16.658 14:04:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:16.658 14:04:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 4171382 00:06:16.658 14:04:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4171382 00:06:16.658 14:04:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.223 lslocks: write error 00:06:17.223 14:04:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 4171382 00:06:17.223 14:04:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 4171382 ']' 00:06:17.223 14:04:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 4171382 00:06:17.223 14:04:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:17.223 14:04:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:17.223 14:04:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4171382 00:06:17.223 14:04:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:17.223 14:04:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:17.223 14:04:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4171382' 00:06:17.223 killing process with pid 4171382 00:06:17.223 14:04:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 4171382 00:06:17.223 14:04:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 4171382 00:06:18.157 14:04:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 4171385 00:06:18.157 14:04:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 4171385 ']' 00:06:18.157 14:04:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 4171385 00:06:18.157 14:04:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:18.157 14:04:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:18.157 14:04:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4171385 00:06:18.157 14:04:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:18.157 14:04:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:18.157 14:04:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4171385' 00:06:18.157 
killing process with pid 4171385 00:06:18.157 14:04:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 4171385 00:06:18.157 14:04:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 4171385 00:06:18.726 00:06:18.726 real 0m3.339s 00:06:18.726 user 0m3.472s 00:06:18.726 sys 0m1.114s 00:06:18.726 14:04:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:18.726 14:04:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.726 ************************************ 00:06:18.726 END TEST non_locking_app_on_locked_coremask 00:06:18.726 ************************************ 00:06:18.726 14:04:45 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:18.726 14:04:45 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:18.726 14:04:45 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:18.726 14:04:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.726 ************************************ 00:06:18.726 START TEST locking_app_on_unlocked_coremask 00:06:18.726 ************************************ 00:06:18.726 14:04:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:06:18.726 14:04:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=4171816 00:06:18.726 14:04:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:18.726 14:04:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 4171816 /var/tmp/spdk.sock 00:06:18.726 14:04:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 4171816 ']' 00:06:18.726 14:04:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.726 14:04:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:18.726 14:04:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.726 14:04:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:18.726 14:04:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.726 [2024-07-24 14:04:45.912656] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:18.726 [2024-07-24 14:04:45.912750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4171816 ] 00:06:18.726 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.726 [2024-07-24 14:04:45.978527] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
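locking_app_on_unlocked_coremask inverts the previous pairing: the first target above starts with --disable-cpumask-locks (hence the deactivation notice), leaving the core lock free for a second, locking target to take. The recurring "Waiting for process to start up and listen on UNIX domain socket ..." lines are the harness's waitforlisten helper polling the RPC socket; a rough, illustrative approximation of that wait (not the actual helper) might look like:

# Sketch: poll until a target's RPC socket answers, with a retry budget.
wait_for_rpc() {
    local sock=${1:-/var/tmp/spdk.sock} retries=100
    while (( retries-- > 0 )); do
        # rpc_get_methods succeeds only once the socket is up and serving.
        "$SPDK_DIR/scripts/rpc.py" -s "$sock" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1
}
wait_for_rpc /var/tmp/spdk2.sock || echo "target never came up" >&2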
00:06:18.726 [2024-07-24 14:04:45.978569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.726 [2024-07-24 14:04:46.064513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.984 14:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:18.984 14:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:18.984 14:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=4171825 00:06:18.984 14:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:18.984 14:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 4171825 /var/tmp/spdk2.sock 00:06:18.984 14:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 4171825 ']' 00:06:18.984 14:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.984 14:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:18.984 14:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:18.984 14:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:18.984 14:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.242 [2024-07-24 14:04:46.367439] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:19.242 [2024-07-24 14:04:46.367513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4171825 ] 00:06:19.242 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.242 [2024-07-24 14:04:46.470590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.500 [2024-07-24 14:04:46.651576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.066 14:04:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:20.066 14:04:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:20.066 14:04:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 4171825 00:06:20.066 14:04:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4171825 00:06:20.066 14:04:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.351 lslocks: write error 00:06:20.351 14:04:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 4171816 00:06:20.351 14:04:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 4171816 ']' 00:06:20.351 14:04:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 4171816 00:06:20.351 14:04:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:20.351 14:04:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:20.351 14:04:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4171816 00:06:20.351 14:04:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:20.351 14:04:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:20.351 14:04:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4171816' 00:06:20.351 killing process with pid 4171816 00:06:20.351 14:04:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 4171816 00:06:20.351 14:04:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 4171816 00:06:21.285 14:04:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 4171825 00:06:21.285 14:04:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 4171825 ']' 00:06:21.285 14:04:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 4171825 00:06:21.285 14:04:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:21.285 14:04:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:21.285 14:04:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4171825 00:06:21.285 14:04:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 
00:06:21.285 14:04:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:21.285 14:04:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4171825' 00:06:21.285 killing process with pid 4171825 00:06:21.285 14:04:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 4171825 00:06:21.285 14:04:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 4171825 00:06:21.851 00:06:21.851 real 0m3.076s 00:06:21.851 user 0m3.216s 00:06:21.851 sys 0m1.020s 00:06:21.851 14:04:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:21.851 14:04:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.851 ************************************ 00:06:21.851 END TEST locking_app_on_unlocked_coremask 00:06:21.851 ************************************ 00:06:21.851 14:04:48 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:21.851 14:04:48 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:21.851 14:04:48 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:21.851 14:04:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.851 ************************************ 00:06:21.851 START TEST locking_app_on_locked_coremask 00:06:21.851 ************************************ 00:06:21.851 14:04:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:06:21.851 14:04:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=4172248 00:06:21.851 14:04:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:21.851 14:04:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 4172248 /var/tmp/spdk.sock 00:06:21.851 14:04:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 4172248 ']' 00:06:21.851 14:04:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.851 14:04:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:21.851 14:04:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.851 14:04:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:21.851 14:04:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.851 [2024-07-24 14:04:49.031299] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:21.851 [2024-07-24 14:04:49.031391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4172248 ] 00:06:21.851 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.851 [2024-07-24 14:04:49.102758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.851 [2024-07-24 14:04:49.188427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.109 14:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:22.109 14:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:22.109 14:04:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=4172261 00:06:22.109 14:04:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:22.109 14:04:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 4172261 /var/tmp/spdk2.sock 00:06:22.109 14:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:22.109 14:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 4172261 /var/tmp/spdk2.sock 00:06:22.109 14:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:22.109 14:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.109 14:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:22.109 14:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.109 14:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 4172261 /var/tmp/spdk2.sock 00:06:22.109 14:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 4172261 ']' 00:06:22.109 14:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.109 14:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:22.109 14:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:22.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:22.110 14:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:22.110 14:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.367 [2024-07-24 14:04:49.490133] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:22.367 [2024-07-24 14:04:49.490222] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4172261 ] 00:06:22.367 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.367 [2024-07-24 14:04:49.598544] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 4172248 has claimed it. 00:06:22.367 [2024-07-24 14:04:49.598620] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:22.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (4172261) - No such process 00:06:22.932 ERROR: process (pid: 4172261) is no longer running 00:06:22.932 14:04:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:22.932 14:04:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:22.932 14:04:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:22.932 14:04:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:22.932 14:04:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:22.932 14:04:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:22.932 14:04:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 4172248 00:06:22.932 14:04:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4172248 00:06:22.932 14:04:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:23.189 lslocks: write error 00:06:23.189 14:04:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 4172248 00:06:23.189 14:04:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 4172248 ']' 00:06:23.189 14:04:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 4172248 00:06:23.189 14:04:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:23.189 14:04:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:23.189 14:04:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4172248 00:06:23.447 14:04:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:23.447 14:04:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:23.447 14:04:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4172248' 00:06:23.447 killing process with pid 4172248 00:06:23.447 14:04:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 4172248 00:06:23.447 14:04:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 4172248 00:06:23.705 00:06:23.705 real 0m2.013s 00:06:23.705 user 0m2.148s 00:06:23.705 sys 0m0.659s 00:06:23.705 14:04:50 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:06:23.705 14:04:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.705 ************************************ 00:06:23.705 END TEST locking_app_on_locked_coremask 00:06:23.705 ************************************ 00:06:23.705 14:04:51 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:23.705 14:04:51 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:23.705 14:04:51 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:23.705 14:04:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.705 ************************************ 00:06:23.705 START TEST locking_overlapped_coremask 00:06:23.705 ************************************ 00:06:23.705 14:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:06:23.705 14:04:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=4172509 00:06:23.705 14:04:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:23.705 14:04:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 4172509 /var/tmp/spdk.sock 00:06:23.705 14:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 4172509 ']' 00:06:23.705 14:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.705 14:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:23.705 14:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.705 14:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:23.705 14:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.963 [2024-07-24 14:04:51.094038] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
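locking_app_on_locked_coremask, which finishes above, exercises the refusal path: the second locking target aborts with "Cannot create lock on core 0, probably process ... has claimed it", and the harness's NOT wrapper asserts the non-zero exit. A condensed sketch of asserting that refusal (the timeout guard and paths are assumptions):

# Sketch: a second locking target on a claimed core must fail to start.
"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 & pid1=$!; sleep 2
# timeout guards the unexpected case where the second target does come up.
if timeout 10 "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock; then
    echo "unexpected: second target acquired core 0" >&2
else
    echo "expected refusal: core 0 already claimed by pid $pid1"
fi
kill "$pid1"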
00:06:23.963 [2024-07-24 14:04:51.094166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4172509 ] 00:06:23.963 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.963 [2024-07-24 14:04:51.162104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:23.963 [2024-07-24 14:04:51.249426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.963 [2024-07-24 14:04:51.249489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.963 [2024-07-24 14:04:51.249492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.221 14:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:24.221 14:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:24.221 14:04:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=4172561 00:06:24.221 14:04:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 4172561 /var/tmp/spdk2.sock 00:06:24.221 14:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:24.221 14:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 4172561 /var/tmp/spdk2.sock 00:06:24.221 14:04:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:24.221 14:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:24.221 14:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:24.221 14:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:24.221 14:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:24.221 14:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 4172561 /var/tmp/spdk2.sock 00:06:24.221 14:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 4172561 ']' 00:06:24.221 14:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.221 14:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:24.221 14:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.221 14:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:24.221 14:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.221 [2024-07-24 14:04:51.531852] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
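The overlapped variant just launched pairs cpumasks 0x7 (cores 0-2) and 0x1c (cores 2-4), which intersect on exactly one core; a line of shell arithmetic makes the overlap behind the claim error below explicit:

# 0x7 & 0x1c = 0x4, i.e. the two masks collide on core 2 only.
printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))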
00:06:24.221 [2024-07-24 14:04:51.531940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4172561 ] 00:06:24.221 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.478 [2024-07-24 14:04:51.632341] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4172509 has claimed it. 00:06:24.478 [2024-07-24 14:04:51.632395] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:25.043 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (4172561) - No such process 00:06:25.043 ERROR: process (pid: 4172561) is no longer running 00:06:25.043 14:04:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:25.044 14:04:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:25.044 14:04:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:25.044 14:04:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:25.044 14:04:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:25.044 14:04:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:25.044 14:04:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:25.044 14:04:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:25.044 14:04:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:25.044 14:04:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:25.044 14:04:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 4172509 00:06:25.044 14:04:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 4172509 ']' 00:06:25.044 14:04:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 4172509 00:06:25.044 14:04:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:06:25.044 14:04:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:25.044 14:04:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4172509 00:06:25.044 14:04:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:25.044 14:04:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:25.044 14:04:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4172509' 00:06:25.044 killing process with pid 4172509 00:06:25.044 14:04:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 4172509 
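After the refused claim, check_remaining_locks (visible in the trace above) compares the surviving lock files against the set expected for mask 0x7. The same glob-versus-expectation comparison, lifted into a standalone sketch:

# Sketch: the lock files on disk should cover exactly cores 0-2.
locks=(/var/tmp/spdk_cpu_lock_*)
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
    echo "lock files match cores 0-2"
else
    echo "unexpected lock files: ${locks[*]}" >&2
fi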
00:06:25.044 14:04:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 4172509 00:06:25.303 00:06:25.303 real 0m1.589s 00:06:25.303 user 0m4.290s 00:06:25.303 sys 0m0.439s 00:06:25.303 14:04:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:25.303 14:04:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.303 ************************************ 00:06:25.303 END TEST locking_overlapped_coremask 00:06:25.303 ************************************ 00:06:25.303 14:04:52 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:25.303 14:04:52 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:25.303 14:04:52 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:25.303 14:04:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.562 ************************************ 00:06:25.562 START TEST locking_overlapped_coremask_via_rpc 00:06:25.562 ************************************ 00:06:25.562 14:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:06:25.562 14:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=4172723 00:06:25.562 14:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:25.562 14:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 4172723 /var/tmp/spdk.sock 00:06:25.562 14:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 4172723 ']' 00:06:25.562 14:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.562 14:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:25.562 14:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.562 14:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:25.562 14:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.562 [2024-07-24 14:04:52.736360] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:25.562 [2024-07-24 14:04:52.736450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4172723 ] 00:06:25.562 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.562 [2024-07-24 14:04:52.808997] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:25.562 [2024-07-24 14:04:52.809035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:25.562 [2024-07-24 14:04:52.897213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.562 [2024-07-24 14:04:52.897269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.562 [2024-07-24 14:04:52.897273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.820 14:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:25.820 14:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:25.820 14:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=4172742 00:06:25.820 14:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 4172742 /var/tmp/spdk2.sock 00:06:25.820 14:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:25.820 14:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 4172742 ']' 00:06:25.820 14:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:25.820 14:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:25.820 14:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:25.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:25.820 14:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:25.820 14:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.820 [2024-07-24 14:04:53.191158] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:25.820 [2024-07-24 14:04:53.191256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4172742 ] 00:06:26.078 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.078 [2024-07-24 14:04:53.295650] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:26.078 [2024-07-24 14:04:53.295692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:26.336 [2024-07-24 14:04:53.471906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:26.336 [2024-07-24 14:04:53.475846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:26.336 [2024-07-24 14:04:53.475848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.901 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:26.901 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:26.901 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:26.901 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.901 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.901 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.901 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:26.901 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:26.901 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:26.901 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:26.901 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.901 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:26.901 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.901 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:26.901 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.901 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.901 [2024-07-24 14:04:54.142888] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4172723 has claimed it. 
00:06:26.901 request: 00:06:26.901 { 00:06:26.901 "method": "framework_enable_cpumask_locks", 00:06:26.901 "req_id": 1 00:06:26.901 } 00:06:26.901 Got JSON-RPC error response 00:06:26.901 response: 00:06:26.901 { 00:06:26.901 "code": -32603, 00:06:26.901 "message": "Failed to claim CPU core: 2" 00:06:26.901 } 00:06:26.901 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:26.901 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:26.901 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:26.901 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:26.901 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:26.901 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 4172723 /var/tmp/spdk.sock 00:06:26.901 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 4172723 ']' 00:06:26.901 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.902 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:26.902 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.902 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:26.902 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.159 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:27.159 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:27.159 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 4172742 /var/tmp/spdk2.sock 00:06:27.159 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 4172742 ']' 00:06:27.159 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.159 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:27.159 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:27.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
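The JSON-RPC exchange above shows what a claim failure looks like on the wire: framework_enable_cpumask_locks returns error -32603 ("Failed to claim CPU core: 2") while another process holds the overlapping core. Driving that call and catching the refusal could look like the following sketch (socket path taken from the trace, error handling assumed):

# Sketch: enabling cpumask locks over RPC fails while core 2 is claimed elsewhere.
if ! "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
    echo "claim refused while the other target holds core 2"
fi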
00:06:27.159 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:27.159 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.417 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:27.417 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:27.417 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:27.417 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:27.417 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:27.417 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:27.417 00:06:27.417 real 0m1.963s 00:06:27.417 user 0m1.009s 00:06:27.417 sys 0m0.172s 00:06:27.417 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:27.417 14:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.417 ************************************ 00:06:27.417 END TEST locking_overlapped_coremask_via_rpc 00:06:27.417 ************************************ 00:06:27.417 14:04:54 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:27.417 14:04:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4172723 ]] 00:06:27.417 14:04:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 4172723 00:06:27.417 14:04:54 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 4172723 ']' 00:06:27.417 14:04:54 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 4172723 00:06:27.417 14:04:54 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:27.417 14:04:54 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:27.417 14:04:54 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4172723 00:06:27.417 14:04:54 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:27.417 14:04:54 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:27.417 14:04:54 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4172723' 00:06:27.417 killing process with pid 4172723 00:06:27.417 14:04:54 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 4172723 00:06:27.417 14:04:54 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 4172723 00:06:27.983 14:04:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4172742 ]] 00:06:27.983 14:04:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4172742 00:06:27.983 14:04:55 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 4172742 ']' 00:06:27.983 14:04:55 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 4172742 00:06:27.983 14:04:55 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:27.983 14:04:55 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:06:27.983 14:04:55 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4172742 00:06:27.983 14:04:55 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:27.983 14:04:55 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:27.983 14:04:55 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4172742' 00:06:27.983 killing process with pid 4172742 00:06:27.983 14:04:55 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 4172742 00:06:27.983 14:04:55 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 4172742 00:06:28.241 14:04:55 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:28.241 14:04:55 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:28.241 14:04:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4172723 ]] 00:06:28.241 14:04:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 4172723 00:06:28.241 14:04:55 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 4172723 ']' 00:06:28.241 14:04:55 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 4172723 00:06:28.241 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (4172723) - No such process 00:06:28.241 14:04:55 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 4172723 is not found' 00:06:28.241 Process with pid 4172723 is not found 00:06:28.241 14:04:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4172742 ]] 00:06:28.241 14:04:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4172742 00:06:28.241 14:04:55 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 4172742 ']' 00:06:28.241 14:04:55 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 4172742 00:06:28.241 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (4172742) - No such process 00:06:28.241 14:04:55 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 4172742 is not found' 00:06:28.241 Process with pid 4172742 is not found 00:06:28.241 14:04:55 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:28.241 00:06:28.241 real 0m15.657s 00:06:28.241 user 0m27.045s 00:06:28.241 sys 0m5.414s 00:06:28.241 14:04:55 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:28.241 14:04:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.241 ************************************ 00:06:28.241 END TEST cpu_locks 00:06:28.241 ************************************ 00:06:28.241 00:06:28.241 real 0m41.400s 00:06:28.241 user 1m18.153s 00:06:28.241 sys 0m9.493s 00:06:28.241 14:04:55 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:28.241 14:04:55 event -- common/autotest_common.sh@10 -- # set +x 00:06:28.241 ************************************ 00:06:28.241 END TEST event 00:06:28.241 ************************************ 00:06:28.241 14:04:55 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:28.241 14:04:55 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:28.241 14:04:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:28.241 14:04:55 -- common/autotest_common.sh@10 -- # set +x 00:06:28.241 ************************************ 00:06:28.241 START TEST thread 00:06:28.241 ************************************ 00:06:28.241 14:04:55 thread -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:28.498 * Looking for test storage... 00:06:28.498 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:06:28.498 14:04:55 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:28.498 14:04:55 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:28.498 14:04:55 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:28.498 14:04:55 thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.498 ************************************ 00:06:28.498 START TEST thread_poller_perf 00:06:28.498 ************************************ 00:06:28.498 14:04:55 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:28.498 [2024-07-24 14:04:55.689510] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:28.498 [2024-07-24 14:04:55.689575] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4173226 ] 00:06:28.498 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.498 [2024-07-24 14:04:55.756952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.498 [2024-07-24 14:04:55.842301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.498 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:29.869 ====================================== 00:06:29.869 busy:2713499057 (cyc) 00:06:29.869 total_run_count: 294000 00:06:29.869 tsc_hz: 2700000000 (cyc) 00:06:29.869 ====================================== 00:06:29.869 poller_cost: 9229 (cyc), 3418 (nsec) 00:06:29.869 00:06:29.869 real 0m1.259s 00:06:29.869 user 0m1.171s 00:06:29.869 sys 0m0.082s 00:06:29.869 14:04:56 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:29.869 14:04:56 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:29.869 ************************************ 00:06:29.869 END TEST thread_poller_perf 00:06:29.869 ************************************ 00:06:29.869 14:04:56 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:29.869 14:04:56 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:29.869 14:04:56 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:29.869 14:04:56 thread -- common/autotest_common.sh@10 -- # set +x 00:06:29.869 ************************************ 00:06:29.869 START TEST thread_poller_perf 00:06:29.869 ************************************ 00:06:29.869 14:04:56 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:29.869 [2024-07-24 14:04:56.991271] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:06:29.869 [2024-07-24 14:04:56.991335] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4173386 ] 00:06:29.869 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.869 [2024-07-24 14:04:57.057136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.869 [2024-07-24 14:04:57.145518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.869 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:31.240 ====================================== 00:06:31.240 busy:2702610941 (cyc) 00:06:31.240 total_run_count: 3856000 00:06:31.240 tsc_hz: 2700000000 (cyc) 00:06:31.240 ====================================== 00:06:31.240 poller_cost: 700 (cyc), 259 (nsec) 00:06:31.240 00:06:31.240 real 0m1.243s 00:06:31.240 user 0m1.150s 00:06:31.240 sys 0m0.087s 00:06:31.240 14:04:58 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:31.240 14:04:58 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:31.240 ************************************ 00:06:31.240 END TEST thread_poller_perf 00:06:31.240 ************************************ 00:06:31.240 14:04:58 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:31.240 00:06:31.240 real 0m2.644s 00:06:31.240 user 0m2.381s 00:06:31.240 sys 0m0.261s 00:06:31.240 14:04:58 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:31.240 14:04:58 thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.240 ************************************ 00:06:31.240 END TEST thread 00:06:31.240 ************************************ 00:06:31.240 14:04:58 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:06:31.240 14:04:58 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:31.240 14:04:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:31.240 14:04:58 -- common/autotest_common.sh@10 -- # set +x 00:06:31.240 ************************************ 00:06:31.240 START TEST accel 00:06:31.240 ************************************ 00:06:31.240 14:04:58 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:06:31.241 * Looking for test storage... 00:06:31.241 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:06:31.241 14:04:58 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:31.241 14:04:58 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:31.241 14:04:58 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:31.241 14:04:58 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=4173587 00:06:31.241 14:04:58 accel -- accel/accel.sh@63 -- # waitforlisten 4173587 00:06:31.241 14:04:58 accel -- common/autotest_common.sh@827 -- # '[' -z 4173587 ']' 00:06:31.241 14:04:58 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.241 14:04:58 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:31.241 14:04:58 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:31.241 14:04:58 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:31.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.241 14:04:58 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:31.241 14:04:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.241 14:04:58 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:31.241 14:04:58 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.241 14:04:58 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.241 14:04:58 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.241 14:04:58 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.241 14:04:58 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.241 14:04:58 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:31.241 14:04:58 accel -- accel/accel.sh@41 -- # jq -r . 00:06:31.241 [2024-07-24 14:04:58.390873] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:31.241 [2024-07-24 14:04:58.390965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4173587 ] 00:06:31.241 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.241 [2024-07-24 14:04:58.456129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.241 [2024-07-24 14:04:58.538111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.498 14:04:58 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:31.498 14:04:58 accel -- common/autotest_common.sh@860 -- # return 0 00:06:31.498 14:04:58 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:31.498 14:04:58 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:31.498 14:04:58 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:31.498 14:04:58 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:31.498 14:04:58 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:31.498 14:04:58 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:31.498 14:04:58 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.498 14:04:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.498 14:04:58 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:31.498 14:04:58 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.498 14:04:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.498 14:04:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:31.498 14:04:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:31.498 14:04:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:31.498 14:04:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.498 14:04:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:31.499 14:04:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:31.499 14:04:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:31.499 14:04:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.499 14:04:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:31.499 14:04:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:31.499 14:04:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:31.499 14:04:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.499 14:04:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:31.499 14:04:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:31.499 14:04:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:31.499 14:04:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.499 14:04:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:31.499 14:04:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:31.499 14:04:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:31.499 14:04:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.499 14:04:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:31.499 14:04:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:31.499 14:04:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:31.499 14:04:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.499 14:04:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:31.499 14:04:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:31.499 14:04:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:31.499 14:04:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.499 14:04:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:31.499 14:04:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:31.499 14:04:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:31.499 14:04:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.499 14:04:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:31.499 14:04:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:31.499 14:04:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:31.499 14:04:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.499 14:04:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:31.499 14:04:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:31.499 14:04:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:31.499 14:04:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.499 14:04:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:31.499 14:04:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:31.499 14:04:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:31.499 14:04:58 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:31.499 14:04:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:31.499 14:04:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:31.499 14:04:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:31.499 14:04:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.499 14:04:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:31.499 14:04:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:31.499 14:04:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:31.499 14:04:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.499 14:04:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:31.499 14:04:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:31.499 14:04:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:31.499 14:04:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.499 14:04:58 accel -- accel/accel.sh@72 -- # IFS== 00:06:31.499 14:04:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:31.499 14:04:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:31.499 14:04:58 accel -- accel/accel.sh@75 -- # killprocess 4173587 00:06:31.499 14:04:58 accel -- common/autotest_common.sh@946 -- # '[' -z 4173587 ']' 00:06:31.499 14:04:58 accel -- common/autotest_common.sh@950 -- # kill -0 4173587 00:06:31.499 14:04:58 accel -- common/autotest_common.sh@951 -- # uname 00:06:31.499 14:04:58 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:31.499 14:04:58 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4173587 00:06:31.499 14:04:58 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:31.499 14:04:58 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:31.499 14:04:58 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4173587' 00:06:31.499 killing process with pid 4173587 00:06:31.499 14:04:58 accel -- common/autotest_common.sh@965 -- # kill 4173587 00:06:31.756 14:04:58 accel -- common/autotest_common.sh@970 -- # wait 4173587 00:06:32.013 14:04:59 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:32.013 14:04:59 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:32.013 14:04:59 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:32.013 14:04:59 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:32.013 14:04:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.013 14:04:59 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:06:32.013 14:04:59 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:32.013 14:04:59 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:32.013 14:04:59 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.013 14:04:59 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.013 14:04:59 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.013 14:04:59 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.013 14:04:59 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.013 14:04:59 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:32.013 14:04:59 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:32.013 14:04:59 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:32.013 14:04:59 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:32.013 14:04:59 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:32.013 14:04:59 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:32.013 14:04:59 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:32.013 14:04:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.013 ************************************ 00:06:32.013 START TEST accel_missing_filename 00:06:32.013 ************************************ 00:06:32.013 14:04:59 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:06:32.013 14:04:59 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:32.013 14:04:59 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:32.013 14:04:59 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:32.013 14:04:59 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.013 14:04:59 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:32.013 14:04:59 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.013 14:04:59 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:32.013 14:04:59 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:32.013 14:04:59 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:32.013 14:04:59 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.013 14:04:59 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.013 14:04:59 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.013 14:04:59 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.013 14:04:59 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.013 14:04:59 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:32.013 14:04:59 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:32.013 [2024-07-24 14:04:59.373675] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:32.013 [2024-07-24 14:04:59.373742] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4173753 ] 00:06:32.271 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.271 [2024-07-24 14:04:59.442816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.271 [2024-07-24 14:04:59.529084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.271 [2024-07-24 14:04:59.586747] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:32.528 [2024-07-24 14:04:59.669809] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:32.528 A filename is required. 
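The accel_missing_filename test above reduces to a simple negative check: the compress workload requires an input file via -l, so running it without one must exit non-zero (the NOT wrapper in autotest_common.sh inverts the status). A sketch of the same check, using the binary path from this job:

if /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w compress; then
    echo "unexpected success: compress must fail without -l <file>" >&2
    exit 1
fi
# accel_perf prints "A filename is required." and exits non-zero,
# which the test counts as a pass.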
00:06:32.528 14:04:59 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:32.528 14:04:59 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:32.528 14:04:59 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:32.528 14:04:59 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:32.528 14:04:59 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:32.528 14:04:59 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:32.528 00:06:32.528 real 0m0.394s 00:06:32.528 user 0m0.278s 00:06:32.528 sys 0m0.150s 00:06:32.528 14:04:59 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:32.528 14:04:59 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:32.528 ************************************ 00:06:32.528 END TEST accel_missing_filename 00:06:32.528 ************************************ 00:06:32.528 14:04:59 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:32.528 14:04:59 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:32.528 14:04:59 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:32.528 14:04:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.528 ************************************ 00:06:32.528 START TEST accel_compress_verify 00:06:32.528 ************************************ 00:06:32.528 14:04:59 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:32.528 14:04:59 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:32.528 14:04:59 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:32.528 14:04:59 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:32.528 14:04:59 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.528 14:04:59 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:32.528 14:04:59 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.528 14:04:59 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:32.528 14:04:59 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:32.528 14:04:59 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:32.528 14:04:59 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.528 14:04:59 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.528 14:04:59 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.528 14:04:59 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.528 14:04:59 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.528 14:04:59 
accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:32.528 14:04:59 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:32.528 [2024-07-24 14:04:59.820464] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:32.529 [2024-07-24 14:04:59.820527] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4173778 ] 00:06:32.529 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.529 [2024-07-24 14:04:59.892416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.786 [2024-07-24 14:04:59.983847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.786 [2024-07-24 14:05:00.042965] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:32.786 [2024-07-24 14:05:00.125325] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:33.044 00:06:33.044 Compression does not support the verify option, aborting. 00:06:33.044 14:05:00 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:33.044 14:05:00 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:33.044 14:05:00 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:33.045 14:05:00 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:33.045 14:05:00 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:33.045 14:05:00 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:33.045 00:06:33.045 real 0m0.410s 00:06:33.045 user 0m0.285s 00:06:33.045 sys 0m0.159s 00:06:33.045 14:05:00 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:33.045 14:05:00 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:33.045 ************************************ 00:06:33.045 END TEST accel_compress_verify 00:06:33.045 ************************************ 00:06:33.045 14:05:00 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:33.045 14:05:00 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:33.045 14:05:00 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:33.045 14:05:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.045 ************************************ 00:06:33.045 START TEST accel_wrong_workload 00:06:33.045 ************************************ 00:06:33.045 14:05:00 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:06:33.045 14:05:00 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:33.045 14:05:00 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:33.045 14:05:00 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:33.045 14:05:00 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.045 14:05:00 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:33.045 14:05:00 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.045 14:05:00 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:33.045 
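accel_compress_verify is the companion negative case: -y asks accel_perf to verify results, which the compress workload does not support, so the run is again expected to fail. Sketched the same way with this job's paths:

bib=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib   # test input file
if /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
       -t 1 -w compress -l "$bib" -y; then
    echo "unexpected success: compress rejects the -y verify option" >&2
    exit 1
fi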
14:05:00 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:33.045 14:05:00 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:33.045 14:05:00 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.045 14:05:00 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.045 14:05:00 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.045 14:05:00 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.045 14:05:00 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.045 14:05:00 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:33.045 14:05:00 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:33.045 Unsupported workload type: foobar 00:06:33.045 [2024-07-24 14:05:00.274783] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:33.045 accel_perf options: 00:06:33.045 [-h help message] 00:06:33.045 [-q queue depth per core] 00:06:33.045 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:33.045 [-T number of threads per core 00:06:33.045 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:33.045 [-t time in seconds] 00:06:33.045 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:33.045 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:33.045 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:33.045 [-l for compress/decompress workloads, name of uncompressed input file 00:06:33.045 [-S for crc32c workload, use this seed value (default 0) 00:06:33.045 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:33.045 [-f for fill workload, use this BYTE value (default 255) 00:06:33.045 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:33.045 [-y verify result if this switch is on] 00:06:33.045 [-a tasks to allocate per core (default: same value as -q)] 00:06:33.045 Can be used to spread operations across a wider range of memory. 
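The usage dump above doubles as the reference for valid -w values. For contrast with the rejected foobar workload, this is the shape of a valid invocation, taken from the accel_crc32c test that runs a few steps later in this log:

/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y
# -t 1   run the workload for one second
# -w     workload type, one of the list printed above
# -S 32  seed value for the crc32c calculation
# -y     verify the results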
00:06:33.045 14:05:00 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:33.045 14:05:00 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:33.045 14:05:00 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:33.045 14:05:00 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:33.045 00:06:33.045 real 0m0.022s 00:06:33.045 user 0m0.012s 00:06:33.045 sys 0m0.010s 00:06:33.045 14:05:00 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:33.045 14:05:00 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:33.045 ************************************ 00:06:33.045 END TEST accel_wrong_workload 00:06:33.045 ************************************ 00:06:33.045 Error: writing output failed: Broken pipe 00:06:33.045 14:05:00 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:33.045 14:05:00 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:33.045 14:05:00 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:33.045 14:05:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.045 ************************************ 00:06:33.045 START TEST accel_negative_buffers 00:06:33.045 ************************************ 00:06:33.045 14:05:00 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:33.045 14:05:00 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:33.045 14:05:00 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:33.045 14:05:00 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:33.045 14:05:00 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.045 14:05:00 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:33.045 14:05:00 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.045 14:05:00 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:33.045 14:05:00 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:33.045 14:05:00 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:33.045 14:05:00 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.045 14:05:00 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.045 14:05:00 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.045 14:05:00 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.045 14:05:00 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.045 14:05:00 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:33.045 14:05:00 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:33.045 -x option must be non-negative. 
00:06:33.045 [2024-07-24 14:05:00.343843] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:33.045 accel_perf options: 00:06:33.045 [-h help message] 00:06:33.045 [-q queue depth per core] 00:06:33.045 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:33.045 [-T number of threads per core 00:06:33.045 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:33.045 [-t time in seconds] 00:06:33.045 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:33.045 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:33.045 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:33.045 [-l for compress/decompress workloads, name of uncompressed input file 00:06:33.045 [-S for crc32c workload, use this seed value (default 0) 00:06:33.045 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:33.045 [-f for fill workload, use this BYTE value (default 255) 00:06:33.045 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:33.045 [-y verify result if this switch is on] 00:06:33.045 [-a tasks to allocate per core (default: same value as -q)] 00:06:33.045 Can be used to spread operations across a wider range of memory. 00:06:33.045 14:05:00 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:33.045 14:05:00 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:33.045 14:05:00 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:33.045 14:05:00 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:33.045 00:06:33.045 real 0m0.023s 00:06:33.045 user 0m0.016s 00:06:33.045 sys 0m0.008s 00:06:33.045 14:05:00 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:33.045 14:05:00 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:33.045 ************************************ 00:06:33.045 END TEST accel_negative_buffers 00:06:33.045 ************************************ 00:06:33.045 Error: writing output failed: Broken pipe 00:06:33.045 14:05:00 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:33.045 14:05:00 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:33.045 14:05:00 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:33.045 14:05:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.045 ************************************ 00:06:33.045 START TEST accel_crc32c 00:06:33.045 ************************************ 00:06:33.045 14:05:00 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:33.045 14:05:00 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:33.045 14:05:00 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:33.045 14:05:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.045 14:05:00 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:33.045 14:05:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.045 14:05:00 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
crc32c -S 32 -y 00:06:33.045 14:05:00 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:33.046 14:05:00 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.046 14:05:00 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.046 14:05:00 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.046 14:05:00 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.046 14:05:00 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.046 14:05:00 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:33.046 14:05:00 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:33.046 [2024-07-24 14:05:00.409868] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:33.046 [2024-07-24 14:05:00.409933] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4173962 ] 00:06:33.303 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.303 [2024-07-24 14:05:00.480031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.304 [2024-07-24 14:05:00.570404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.304 14:05:00 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:33.304 14:05:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.720 14:05:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:34.720 14:05:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.720 14:05:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.720 14:05:01 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.720 14:05:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:34.720 14:05:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.720 14:05:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.720 14:05:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.720 14:05:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:34.720 14:05:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.720 14:05:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.720 14:05:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.720 14:05:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:34.720 14:05:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.720 14:05:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.720 14:05:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.720 14:05:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:34.720 14:05:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.720 14:05:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.720 14:05:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.720 14:05:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:34.720 14:05:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:34.720 14:05:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.720 14:05:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.720 14:05:01 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:34.720 14:05:01 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:34.720 14:05:01 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.720 00:06:34.720 real 0m1.410s 00:06:34.720 user 0m1.269s 00:06:34.720 sys 0m0.149s 00:06:34.720 14:05:01 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:34.720 14:05:01 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:34.720 ************************************ 00:06:34.720 END TEST accel_crc32c 00:06:34.720 ************************************ 00:06:34.720 14:05:01 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:34.720 14:05:01 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:34.720 14:05:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.720 14:05:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.720 ************************************ 00:06:34.720 START TEST accel_crc32c_C2 00:06:34.720 ************************************ 00:06:34.720 14:05:01 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:34.720 14:05:01 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:34.720 14:05:01 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:34.720 14:05:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:34.720 14:05:01 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:34.720 14:05:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:34.720 14:05:01 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:34.720 14:05:01 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:06:34.720 14:05:01 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:34.720 14:05:01 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:34.720 14:05:01 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:34.720 14:05:01 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:34.720 14:05:01 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:34.720 14:05:01 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=,
00:06:34.720 14:05:01 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r .
00:06:34.720 [2024-07-24 14:05:01.866612] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:06:34.721 [2024-07-24 14:05:01.866675] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4174122 ]
00:06:34.721 EAL: No free 2048 kB hugepages reported on node 1
00:06:34.721 [2024-07-24 14:05:01.939172] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:34.721 [2024-07-24 14:05:02.029928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:34.978 14:05:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:06:34.979 14:05:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c
00:06:34.979 14:05:02 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c
00:06:34.979 14:05:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:06:34.979 14:05:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:34.979 14:05:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:06:34.979 14:05:02 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:06:34.979 14:05:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:06:34.979 14:05:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:06:34.979 14:05:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:06:34.979 14:05:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:06:34.979 14:05:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
00:06:35.911 14:05:03 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:35.911 14:05:03 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]]
00:06:35.911 14:05:03 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:35.911 real 0m1.420s
00:06:35.911 user 0m1.278s
00:06:35.911 sys 0m0.150s
00:06:35.911 14:05:03 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:35.911 14:05:03 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:06:35.911 ************************************
00:06:35.911 END TEST accel_crc32c_C2
00:06:35.911 ************************************
00:06:36.169 14:05:03 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y
00:06:36.169 14:05:03 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']'
00:06:36.169 14:05:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:36.169 14:05:03 accel -- common/autotest_common.sh@10 -- # set +x
00:06:36.169 ************************************
00:06:36.169 START TEST accel_copy
00:06:36.169 ************************************
00:06:36.169 14:05:03 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y
00:06:36.169 14:05:03 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc
00:06:36.169 14:05:03 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module
00:06:36.169 14:05:03 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y
00:06:36.169 14:05:03 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:06:36.169 14:05:03 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config
00:06:36.169 14:05:03 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:36.169 14:05:03 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:36.169 14:05:03 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:36.169 14:05:03 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:36.169 14:05:03 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:36.169 14:05:03 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=,
00:06:36.169 14:05:03 accel.accel_copy -- accel/accel.sh@41 -- # jq -r .
00:06:36.169 [2024-07-24 14:05:03.335440] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:06:36.169 [2024-07-24 14:05:03.335500] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4174281 ]
00:06:36.169 EAL: No free 2048 kB hugepages reported on node 1
00:06:36.169 [2024-07-24 14:05:03.408643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:36.169 [2024-07-24 14:05:03.502065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:36.427 14:05:03 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1
00:06:36.427 14:05:03 accel.accel_copy -- accel/accel.sh@20 -- # val=copy
00:06:36.427 14:05:03 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy
00:06:36.427 14:05:03 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:36.427 14:05:03 accel.accel_copy -- accel/accel.sh@20 -- # val=software
00:06:36.427 14:05:03 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software
00:06:36.427 14:05:03 accel.accel_copy -- accel/accel.sh@20 -- # val=32
00:06:36.428 14:05:03 accel.accel_copy -- accel/accel.sh@20 -- # val=32
00:06:36.428 14:05:03 accel.accel_copy -- accel/accel.sh@20 -- # val=1
00:06:36.428 14:05:03 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds'
00:06:36.428 14:05:03 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes
00:06:37.801 14:05:04 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:37.801 14:05:04 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]]
00:06:37.801 14:05:04 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:37.801 real 0m1.421s
00:06:37.801 user 0m1.264s
00:06:37.801 sys 0m0.159s
00:06:37.801 14:05:04 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:37.801 14:05:04 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x
00:06:37.801 ************************************
00:06:37.801 END TEST accel_copy
00:06:37.801 ************************************
00:06:37.801 14:05:04 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:06:37.801 14:05:04 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']'
00:06:37.801 14:05:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:37.801 14:05:04 accel -- common/autotest_common.sh@10 -- # set +x
00:06:37.801 ************************************
00:06:37.801 START TEST accel_fill
00:06:37.801 ************************************
00:06:37.801 14:05:04 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:06:37.801 14:05:04 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc
00:06:37.801 14:05:04 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module
00:06:37.801 14:05:04 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:06:37.801 14:05:04 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:06:37.801 14:05:04 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config
00:06:37.801 14:05:04 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:37.801 14:05:04 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:37.801 14:05:04 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:37.801 14:05:04 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:37.801 14:05:04 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:37.801 14:05:04 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=,
00:06:37.801 14:05:04 accel.accel_fill -- accel/accel.sh@41 -- # jq -r .
00:06:37.801 [2024-07-24 14:05:04.799500] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:06:37.801 [2024-07-24 14:05:04.799562] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4174547 ]
00:06:37.801 EAL: No free 2048 kB hugepages reported on node 1
00:06:37.801 [2024-07-24 14:05:04.869630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:37.801 [2024-07-24 14:05:04.962058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:37.801 14:05:05 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1
00:06:37.801 14:05:05 accel.accel_fill -- accel/accel.sh@20 -- # val=fill
00:06:37.801 14:05:05 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill
00:06:37.801 14:05:05 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80
00:06:37.801 14:05:05 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:37.801 14:05:05 accel.accel_fill -- accel/accel.sh@20 -- # val=software
00:06:37.801 14:05:05 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software
00:06:37.801 14:05:05 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:06:37.801 14:05:05 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:06:37.801 14:05:05 accel.accel_fill -- accel/accel.sh@20 -- # val=1
00:06:37.802 14:05:05 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds'
00:06:37.802 14:05:05 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes
00:06:39.174 14:05:06 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:39.174 14:05:06 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]]
00:06:39.174 14:05:06 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:39.174 real 0m1.404s
00:06:39.174 user 0m1.263s
00:06:39.174 sys 0m0.143s
00:06:39.174 14:05:06 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:39.174 14:05:06 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x
00:06:39.174 ************************************
00:06:39.174 END TEST accel_fill
00:06:39.174 ************************************
00:06:39.175 14:05:06 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:06:39.175 14:05:06 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']'
00:06:39.175 14:05:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:39.175 14:05:06 accel -- common/autotest_common.sh@10 -- # set +x
00:06:39.175 ************************************
00:06:39.175 START TEST accel_copy_crc32c
00:06:39.175 ************************************
00:06:39.175 14:05:06 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y
00:06:39.175 14:05:06 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc
00:06:39.175 14:05:06 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module
00:06:39.175 14:05:06 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:06:39.175 14:05:06 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:06:39.175 14:05:06 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:06:39.175 14:05:06 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:39.175 14:05:06 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:39.175 14:05:06 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:39.175 14:05:06 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:39.175 14:05:06 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:39.175 14:05:06 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=,
00:06:39.175 14:05:06 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r .
00:06:39.175 [2024-07-24 14:05:06.243906] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:06:39.175 [2024-07-24 14:05:06.243967] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4174708 ]
00:06:39.175 EAL: No free 2048 kB hugepages reported on node 1
00:06:39.175 [2024-07-24 14:05:06.315909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:39.175 [2024-07-24 14:05:06.407807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:39.175 14:05:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1
00:06:39.175 14:05:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c
00:06:39.175 14:05:06 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:06:39.175 14:05:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0
00:06:39.175 14:05:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:39.175 14:05:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:39.175 14:05:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software
00:06:39.175 14:05:06 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software
00:06:39.175 14:05:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:06:39.175 14:05:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:06:39.175 14:05:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1
00:06:39.175 14:05:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds'
00:06:39.175 14:05:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes
00:06:40.548 14:05:07 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:40.548 14:05:07 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:06:40.548 14:05:07 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:40.548 real 0m1.418s
00:06:40.548 user 0m1.267s
00:06:40.548 sys 0m0.154s
00:06:40.548 14:05:07 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:40.548 14:05:07 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x
00:06:40.548 ************************************
00:06:40.548 END TEST accel_copy_crc32c
00:06:40.548 ************************************
00:06:40.548 14:05:07 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:06:40.548 14:05:07 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']'
00:06:40.548 14:05:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:40.548 14:05:07 accel -- common/autotest_common.sh@10 -- # set +x
00:06:40.548 ************************************
00:06:40.548 START TEST accel_copy_crc32c_C2
00:06:40.548 ************************************
00:06:40.548 14:05:07 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2
00:06:40.548 14:05:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc
00:06:40.548 14:05:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module
00:06:40.548 14:05:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:06:40.548 14:05:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:06:40.548 14:05:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:06:40.548 14:05:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:40.548 14:05:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:40.548 14:05:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:40.548 14:05:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:40.548 14:05:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:40.548 14:05:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=,
00:06:40.548 14:05:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r .
00:06:40.548 [2024-07-24 14:05:07.704729] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:06:40.548 [2024-07-24 14:05:07.704843] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4174867 ]
00:06:40.548 EAL: No free 2048 kB hugepages reported on node 1
00:06:40.548 [2024-07-24 14:05:07.774466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:40.548 [2024-07-24 14:05:07.867122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:40.806 14:05:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:06:40.806 14:05:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c
00:06:40.806 14:05:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:06:40.806 14:05:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:06:40.806 14:05:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:40.806 14:05:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes'
00:06:40.806 14:05:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:06:40.806 14:05:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:06:40.806 14:05:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:06:40.806 14:05:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:06:40.806 14:05:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:06:40.807 14:05:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:06:40.807 14:05:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
00:06:41.739 14:05:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:41.739 14:05:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:06:41.739 14:05:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:41.739 real 0m1.414s
00:06:41.739 user 0m1.268s
00:06:41.739 sys 0m0.149s
00:06:41.739 14:05:09 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:41.739 14:05:09 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:06:41.739 ************************************
00:06:41.739 END TEST accel_copy_crc32c_C2
00:06:41.739 ************************************
00:06:41.997 14:05:09 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:06:41.997 14:05:09 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']'
00:06:41.997 14:05:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:41.997 14:05:09 accel -- common/autotest_common.sh@10 -- # set +x
00:06:41.997 ************************************
00:06:41.997 START TEST accel_dualcast
00:06:41.997 ************************************
00:06:41.997 14:05:09 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y
00:06:41.997 14:05:09 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc
00:06:41.997 14:05:09 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module
00:06:41.997 14:05:09 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:06:41.997 14:05:09 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:06:41.997 14:05:09 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config
00:06:41.997 14:05:09 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:41.997 14:05:09 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:41.997 14:05:09 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:41.997 14:05:09 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:41.997 14:05:09 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:41.997 14:05:09 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=,
00:06:41.997 14:05:09 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r .
[2024-07-24 14:05:09.159118] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:06:41.997 [2024-07-24 14:05:09.159191] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4175085 ]
00:06:41.998 EAL: No free 2048 kB hugepages reported on node 1
00:06:41.998 [2024-07-24 14:05:09.230113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:41.998 [2024-07-24 14:05:09.322408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:42.255 14:05:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1
00:06:42.255 14:05:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast
00:06:42.256 14:05:09 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast
00:06:42.256 14:05:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:42.256 14:05:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software
00:06:42.256 14:05:09 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software
00:06:42.256 14:05:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:06:42.256 14:05:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:06:42.256 14:05:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1
00:06:42.256 14:05:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds'
00:06:42.256 14:05:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes
00:06:43.189 14:05:10 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:43.189 14:05:10 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:06:43.189 14:05:10 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:43.189 real 0m1.410s
00:06:43.189 user 0m1.265s
00:06:43.189 sys 0m0.146s
00:06:43.189 14:05:10 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:43.189 14:05:10 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x
00:06:43.189 ************************************
00:06:43.189 END TEST accel_dualcast
00:06:43.189 ************************************
00:06:43.447 14:05:10 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:06:43.447 14:05:10 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']'
00:06:43.447 14:05:10 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:43.447 14:05:10 accel -- common/autotest_common.sh@10 -- # set +x
00:06:43.447 ************************************
00:06:43.447 START TEST accel_compare
00:06:43.447 ************************************
00:06:43.447 14:05:10 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y
00:06:43.447 14:05:10 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc
00:06:43.447 14:05:10 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module
00:06:43.447 14:05:10 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:06:43.447 14:05:10 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:06:43.447 14:05:10 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config
00:06:43.447 14:05:10 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:43.447 14:05:10 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:43.447 14:05:10 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:43.447 14:05:10 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:43.447 14:05:10 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:43.447 14:05:10 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=,
00:06:43.447 14:05:10 accel.accel_compare -- accel/accel.sh@41 -- # jq -r .
[2024-07-24 14:05:10.613504] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:06:43.447 [2024-07-24 14:05:10.613570] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4175292 ]
00:06:43.447 EAL: No free 2048 kB hugepages reported on node 1
00:06:43.448 [2024-07-24 14:05:10.686845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:43.448 [2024-07-24 14:05:10.779840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:43.706 14:05:10 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1
00:06:43.706 14:05:10 accel.accel_compare -- accel/accel.sh@20 -- # val=compare
00:06:43.706 14:05:10 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare
00:06:43.706 14:05:10 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:43.706 14:05:10 accel.accel_compare -- accel/accel.sh@20 -- # val=software
00:06:43.706 14:05:10 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software
00:06:43.706 14:05:10 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:06:43.706 14:05:10 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:06:43.706 14:05:10 accel.accel_compare -- accel/accel.sh@20 -- # val=1
00:06:43.706 14:05:10 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds'
00:06:43.706 14:05:10 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes
00:06:44.639 14:05:11 accel.accel_compare
-- accel/accel.sh@20 -- # val= 00:06:44.639 14:05:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.639 14:05:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.639 14:05:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.639 14:05:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:44.639 14:05:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.639 14:05:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.639 14:05:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.639 14:05:11 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:44.639 14:05:11 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:44.639 14:05:11 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.639 00:06:44.639 real 0m1.398s 00:06:44.639 user 0m1.244s 00:06:44.639 sys 0m0.155s 00:06:44.639 14:05:11 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:44.639 14:05:11 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:44.639 ************************************ 00:06:44.639 END TEST accel_compare 00:06:44.639 ************************************ 00:06:44.897 14:05:12 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:44.897 14:05:12 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:44.897 14:05:12 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.897 14:05:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.897 ************************************ 00:06:44.897 START TEST accel_xor 00:06:44.897 ************************************ 00:06:44.897 14:05:12 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:06:44.897 14:05:12 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:44.897 14:05:12 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:44.897 14:05:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.897 14:05:12 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:44.897 14:05:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.897 14:05:12 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:44.897 14:05:12 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:44.897 14:05:12 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.897 14:05:12 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.897 14:05:12 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.897 14:05:12 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.897 14:05:12 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.897 14:05:12 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:44.897 14:05:12 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:44.897 [2024-07-24 14:05:12.061112] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
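Annotation: the compare pass that just finished reduces to a single accel_perf invocation, traced at accel.sh@12 above. A rough stand-alone reproduction using only the flags shown in this log; note the JSON config piped through -c /dev/fd/62 is empty here (per the [[ -n '' ]] check in the trace), so leaving -c off and letting the software module be picked is an assumption:

  cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
  ./build/examples/accel_perf -t 1 -w compare -y   # 1-second run, -y verifies results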
00:06:44.897 [2024-07-24 14:05:12.061180] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4175455 ] 00:06:44.897 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.897 [2024-07-24 14:05:12.135553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.897 [2024-07-24 14:05:12.226159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:45.155 14:05:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.156 14:05:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.156 14:05:12 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:45.156 14:05:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.156 14:05:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.156 14:05:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.156 14:05:12 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:45.156 14:05:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.156 14:05:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.156 14:05:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.156 14:05:12 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:45.156 14:05:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.156 14:05:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.156 14:05:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.156 14:05:12 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:45.156 14:05:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.156 14:05:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.156 14:05:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.156 14:05:12 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:45.156 14:05:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.156 14:05:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.156 14:05:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.156 14:05:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.156 14:05:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.156 14:05:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.156 14:05:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.156 14:05:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.156 14:05:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.156 14:05:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.156 14:05:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.088 14:05:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.088 14:05:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.088 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.088 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.088 14:05:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.088 14:05:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.088 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.088 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.088 14:05:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.088 14:05:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.088 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.088 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.088 14:05:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.088 14:05:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.088 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.088 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.088 14:05:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.088 
14:05:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.088 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.088 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.088 14:05:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.088 14:05:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.088 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.088 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.088 14:05:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.088 14:05:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:46.088 14:05:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.088 00:06:46.088 real 0m1.414s 00:06:46.088 user 0m1.261s 00:06:46.088 sys 0m0.155s 00:06:46.088 14:05:13 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.088 14:05:13 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:46.088 ************************************ 00:06:46.088 END TEST accel_xor 00:06:46.088 ************************************ 00:06:46.346 14:05:13 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:46.346 14:05:13 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:46.346 14:05:13 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.346 14:05:13 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.346 ************************************ 00:06:46.346 START TEST accel_xor 00:06:46.346 ************************************ 00:06:46.346 14:05:13 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:06:46.346 14:05:13 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:46.346 14:05:13 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:46.346 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.346 14:05:13 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:46.346 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.346 14:05:13 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:46.346 14:05:13 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:46.346 14:05:13 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.346 14:05:13 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.346 14:05:13 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.346 14:05:13 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.346 14:05:13 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.346 14:05:13 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:46.346 14:05:13 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:46.346 [2024-07-24 14:05:13.519494] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
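Annotation: the two accel_xor cases differ only in the XOR source-buffer count: the run above reports val=2 with no extra flag, and the run starting below passes -x 3 and reports val=3. Side by side, with flags exactly as they appear in this log (the "2 is the default" reading is inferred from this trace, not from accel_perf documentation):

  ./build/examples/accel_perf -t 1 -w xor -y        # 2 XOR sources (default per this run)
  ./build/examples/accel_perf -t 1 -w xor -y -x 3   # 3 XOR sources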
00:06:46.346 [2024-07-24 14:05:13.519559] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4175612 ] 00:06:46.346 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.346 [2024-07-24 14:05:13.590000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.346 [2024-07-24 14:05:13.681995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.604 14:05:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.537 14:05:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.537 14:05:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.537 14:05:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.537 14:05:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.537 14:05:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.537 14:05:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.537 14:05:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.537 14:05:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.537 14:05:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.537 14:05:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.537 14:05:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.537 14:05:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.537 14:05:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.537 14:05:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.537 14:05:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.537 14:05:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.537 14:05:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.537 
14:05:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.537 14:05:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.537 14:05:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.537 14:05:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.537 14:05:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.537 14:05:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.537 14:05:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.537 14:05:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.537 14:05:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:47.537 14:05:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.537 00:06:47.537 real 0m1.395s 00:06:47.537 user 0m1.253s 00:06:47.537 sys 0m0.145s 00:06:47.537 14:05:14 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:47.537 14:05:14 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:47.537 ************************************ 00:06:47.537 END TEST accel_xor 00:06:47.537 ************************************ 00:06:47.796 14:05:14 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:47.796 14:05:14 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:47.796 14:05:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:47.796 14:05:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.796 ************************************ 00:06:47.796 START TEST accel_dif_verify 00:06:47.796 ************************************ 00:06:47.796 14:05:14 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:06:47.796 14:05:14 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:47.796 14:05:14 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:47.796 14:05:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:47.796 14:05:14 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:47.796 14:05:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:47.796 14:05:14 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:47.796 14:05:14 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:47.796 14:05:14 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.796 14:05:14 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.796 14:05:14 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.796 14:05:14 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.796 14:05:14 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.796 14:05:14 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:47.796 14:05:14 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:47.796 [2024-07-24 14:05:14.953857] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
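Annotation: every case in this section is launched through the same wrapper, visible at accel.sh@108-116 throughout the trace: run_test names the case and hands the workload flags to accel_test, which execs accel_perf. The calls seen so far, copied from the trace:

  run_test accel_compare    accel_test -t 1 -w compare -y
  run_test accel_xor        accel_test -t 1 -w xor -y
  run_test accel_xor        accel_test -t 1 -w xor -y -x 3
  run_test accel_dif_verify accel_test -t 1 -w dif_verify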
00:06:47.796 [2024-07-24 14:05:14.953916] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4175880 ] 00:06:47.796 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.796 [2024-07-24 14:05:15.024956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.796 [2024-07-24 14:05:15.116881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.054 
14:05:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:48.054 14:05:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.055 14:05:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.055 14:05:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.055 14:05:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:48.055 14:05:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.055 14:05:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.055 14:05:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.019 14:05:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:49.019 
14:05:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.019 14:05:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.019 14:05:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.019 14:05:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:49.019 14:05:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.019 14:05:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.019 14:05:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.019 14:05:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:49.019 14:05:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.019 14:05:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.019 14:05:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.019 14:05:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:49.019 14:05:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.019 14:05:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.019 14:05:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.019 14:05:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:49.019 14:05:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.019 14:05:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.019 14:05:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.019 14:05:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:49.019 14:05:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.019 14:05:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.019 14:05:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.019 14:05:16 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:49.019 14:05:16 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:49.019 14:05:16 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.019 00:06:49.019 real 0m1.411s 00:06:49.019 user 0m1.256s 00:06:49.019 sys 0m0.159s 00:06:49.019 14:05:16 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:49.019 14:05:16 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:49.019 ************************************ 00:06:49.019 END TEST accel_dif_verify 00:06:49.019 ************************************ 00:06:49.280 14:05:16 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:49.280 14:05:16 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:49.280 14:05:16 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:49.280 14:05:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.280 ************************************ 00:06:49.280 START TEST accel_dif_generate 00:06:49.280 ************************************ 00:06:49.280 14:05:16 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:06:49.280 14:05:16 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:49.280 14:05:16 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:49.280 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.280 14:05:16 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 
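Annotation: the dif_verify case above negotiates three sizes before running, visible as the quoted vals in its config dump. The trace prints the values without labels, so the reading below is an assumption based on how T10 protection information is usually laid out:

  # '4096 bytes' - transfer/buffer size per operation
  # '512 bytes'  - block interval protected by one DIF
  # '8 bytes'    - the DIF itself (guard/app/ref tags) appended per block
  ./build/examples/accel_perf -t 1 -w dif_verify    # flags as traced at accel.sh@12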
00:06:49.280 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.280 14:05:16 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:49.280 14:05:16 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:49.280 14:05:16 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.280 14:05:16 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.280 14:05:16 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.280 14:05:16 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.280 14:05:16 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.280 14:05:16 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:49.280 14:05:16 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:49.280 [2024-07-24 14:05:16.413156] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:49.280 [2024-07-24 14:05:16.413223] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4176036 ] 00:06:49.280 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.280 [2024-07-24 14:05:16.485183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.280 [2024-07-24 14:05:16.578932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.280 14:05:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:49.280 14:05:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.280 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.280 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.280 14:05:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:49.280 14:05:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.280 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.280 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@23 
-- # accel_opc=dif_generate 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.281 14:05:16 
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.281 14:05:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.653 14:05:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.653 14:05:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.653 14:05:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.653 14:05:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.653 14:05:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.653 14:05:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.653 14:05:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.653 14:05:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.653 14:05:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.653 14:05:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.653 14:05:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.653 14:05:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.653 14:05:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.653 14:05:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.653 14:05:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.653 14:05:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.653 14:05:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.653 14:05:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.653 14:05:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.653 14:05:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.653 14:05:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.653 14:05:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.653 14:05:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.654 14:05:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.654 14:05:17 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.654 14:05:17 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:50.654 14:05:17 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.654 00:06:50.654 real 0m1.404s 00:06:50.654 user 0m1.259s 00:06:50.654 sys 0m0.147s 00:06:50.654 
14:05:17 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:50.654 14:05:17 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:50.654 ************************************ 00:06:50.654 END TEST accel_dif_generate 00:06:50.654 ************************************ 00:06:50.654 14:05:17 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:50.654 14:05:17 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:50.654 14:05:17 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:50.654 14:05:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.654 ************************************ 00:06:50.654 START TEST accel_dif_generate_copy 00:06:50.654 ************************************ 00:06:50.654 14:05:17 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:06:50.654 14:05:17 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:50.654 14:05:17 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:50.654 14:05:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.654 14:05:17 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:50.654 14:05:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.654 14:05:17 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:50.654 14:05:17 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:50.654 14:05:17 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.654 14:05:17 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.654 14:05:17 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.654 14:05:17 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.654 14:05:17 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.654 14:05:17 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:50.654 14:05:17 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:50.654 [2024-07-24 14:05:17.859525] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
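Annotation: each case above ends with the same three checks at accel.sh@27 before printing its timing. A condensed sketch of that pass/fail gate, with variable names taken from the assignments traced earlier (the trace shows them already expanded, e.g. [[ -n software ]]):

  # Case passes only if a module and an opcode were selected,
  # and the module resolved to the software path.
  [[ -n $accel_module && -n $accel_opc && $accel_module == software ]]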
00:06:50.654 [2024-07-24 14:05:17.859589] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4176203 ] 00:06:50.654 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.654 [2024-07-24 14:05:17.930462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.654 [2024-07-24 14:05:18.022507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.912 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:50.912 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.912 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.912 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.912 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:50.912 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.912 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.912 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.912 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:50.912 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.912 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.912 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.912 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:50.912 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.912 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.912 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.912 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:50.912 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.912 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.913 14:05:18 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.913 14:05:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.287 14:05:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:52.287 14:05:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.287 14:05:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
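The wall of '-- # val=' / 'case "$var" in' lines above is bash xtrace from the accel.sh harness: the test configuration is streamed as colon-separated var:val pairs and replayed through a read/case loop, so each token produces a val assignment, a case dispatch, an IFS reset, and the next read. A minimal sketch of that loop shape, assuming a config stream on stdin (key names other than accel_opc and accel_module are hypothetical, not the harness's real keys):

    # Sketch of the parse loop traced above; 'opc' and 'module' as key
    # names are assumptions, only the resulting variables appear in the log.
    while IFS=: read -r var val; do
        case "$var" in
            opc)    accel_opc=$val ;;     # e.g. dif_generate_copy, compress
            module) accel_module=$val ;;  # e.g. software
        esac
    done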
00:06:52.287 14:05:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.287 14:05:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:52.287 14:05:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.287 14:05:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.287 14:05:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.287 14:05:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:52.288 14:05:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.288 14:05:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.288 14:05:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.288 14:05:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:52.288 14:05:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.288 14:05:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.288 14:05:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.288 14:05:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:52.288 14:05:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.288 14:05:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.288 14:05:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.288 14:05:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:52.288 14:05:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.288 14:05:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.288 14:05:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.288 14:05:19 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.288 14:05:19 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:52.288 14:05:19 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.288 00:06:52.288 real 0m1.419s 00:06:52.288 user 0m1.269s 00:06:52.288 sys 0m0.152s 00:06:52.288 14:05:19 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:52.288 14:05:19 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:52.288 ************************************ 00:06:52.288 END TEST accel_dif_generate_copy 00:06:52.288 ************************************ 00:06:52.288 14:05:19 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:52.288 14:05:19 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:52.288 14:05:19 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:52.288 14:05:19 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:52.288 14:05:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.288 ************************************ 00:06:52.288 START TEST accel_comp 00:06:52.288 ************************************ 00:06:52.288 14:05:19 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@17 -- # local 
accel_module 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:52.288 [2024-07-24 14:05:19.322905] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:52.288 [2024-07-24 14:05:19.322968] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4176361 ] 00:06:52.288 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.288 [2024-07-24 14:05:19.393753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.288 [2024-07-24 14:05:19.485347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.288 14:05:19 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:52.288 14:05:19 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.288 14:05:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.661 14:05:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.661 14:05:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.661 14:05:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.661 14:05:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.661 14:05:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.661 14:05:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.661 14:05:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.661 14:05:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.661 14:05:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.661 14:05:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.661 14:05:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.661 14:05:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.661 14:05:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.661 14:05:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.661 14:05:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.661 14:05:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.661 14:05:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.661 14:05:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.662 14:05:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.662 14:05:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.662 14:05:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.662 14:05:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.662 14:05:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.662 14:05:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.662 14:05:20 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:53.662 14:05:20 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:53.662 14:05:20 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.662 00:06:53.662 real 0m1.412s 00:06:53.662 user 0m1.265s 00:06:53.662 sys 0m0.150s 00:06:53.662 14:05:20 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:53.662 14:05:20 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:53.662 ************************************ 00:06:53.662 END TEST accel_comp 00:06:53.662 ************************************ 00:06:53.662 14:05:20 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:53.662 14:05:20 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:53.662 14:05:20 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:53.662 14:05:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.662 ************************************ 00:06:53.662 START TEST accel_decomp 00:06:53.662 ************************************ 00:06:53.662 14:05:20 
accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:53.662 14:05:20 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:53.662 14:05:20 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:53.662 14:05:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.662 14:05:20 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:53.662 14:05:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.662 14:05:20 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:53.662 14:05:20 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:53.662 14:05:20 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.662 14:05:20 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.662 14:05:20 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.662 14:05:20 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.662 14:05:20 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.662 14:05:20 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:53.662 14:05:20 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:53.662 [2024-07-24 14:05:20.786481] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:53.662 [2024-07-24 14:05:20.786547] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4176628 ] 00:06:53.662 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.662 [2024-07-24 14:05:20.859112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.662 [2024-07-24 14:05:20.949264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:53.662 14:05:21 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.662 14:05:21 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.662 14:05:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.034 14:05:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.034 14:05:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.034 14:05:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.034 14:05:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.034 14:05:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.034 14:05:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.034 14:05:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.034 14:05:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.034 14:05:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.034 14:05:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.034 14:05:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.034 14:05:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.034 14:05:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.034 14:05:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.034 14:05:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.034 14:05:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.034 14:05:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.034 14:05:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.034 14:05:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.034 14:05:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.034 14:05:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.034 14:05:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.034 14:05:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.034 14:05:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.034 14:05:22 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.034 14:05:22 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:55.034 14:05:22 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.034 00:06:55.034 real 0m1.407s 00:06:55.034 user 0m1.264s 00:06:55.034 sys 0m0.147s 00:06:55.034 14:05:22 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:55.034 14:05:22 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:55.034 ************************************ 00:06:55.034 END TEST accel_decomp 00:06:55.034 ************************************ 00:06:55.034 
14:05:22 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:55.034 14:05:22 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:55.034 14:05:22 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:55.034 14:05:22 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.034 ************************************ 00:06:55.034 START TEST accel_decmop_full 00:06:55.034 ************************************ 00:06:55.034 14:05:22 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:55.034 14:05:22 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:06:55.035 14:05:22 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:06:55.035 14:05:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.035 14:05:22 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:55.035 14:05:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.035 14:05:22 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:55.035 14:05:22 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:06:55.035 14:05:22 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.035 14:05:22 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.035 14:05:22 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.035 14:05:22 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.035 14:05:22 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.035 14:05:22 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:06:55.035 14:05:22 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:06:55.035 [2024-07-24 14:05:22.236094] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
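The accel_decmop_full variant below runs the same command with -o 0 appended; judging from the trace, where the '4096 bytes' buffer of the earlier tests becomes '111250 bytes', -o sets the transfer size and 0 defers to the size of the whole bib input file. That reading of the flag is inferred from this log, not from accel_perf's help text:

    # Sketch of the full-buffer variant (reusing $SPDK from the sketch
    # above); the meaning of -o 0 is an assumption drawn from the
    # '111250 bytes' values in the trace below.
    $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0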
00:06:55.035 [2024-07-24 14:05:22.236159] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4176783 ] 00:06:55.035 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.035 [2024-07-24 14:05:22.303590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.035 [2024-07-24 14:05:22.385412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
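Every EAL parameter line so far pins the app to one core with -c 0x1, matching the 'Total cores available: 1' notice and the single reactor. The _mcore variants further down pass -m 0xf to accel_perf, which surfaces as -c 0xf in their EAL line and brings up reactors on cores 0 through 3. Side by side, as a sketch:

    # Single-core vs multi-core runs; masks as they appear in this log's
    # EAL parameter lines (reusing $SPDK from the sketch above).
    $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y         # EAL -c 0x1, one reactor
    $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -m 0xf  # EAL -c 0xf, reactors on cores 0-3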
00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.293 14:05:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.294 14:05:22 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.665 14:05:23 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:56.665 14:05:23 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.665 14:05:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.665 14:05:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # 
read -r var val 00:06:56.665 14:05:23 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:56.665 14:05:23 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.665 14:05:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.665 14:05:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.665 14:05:23 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:56.665 14:05:23 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.665 14:05:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.665 14:05:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.665 14:05:23 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:56.665 14:05:23 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.665 14:05:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.665 14:05:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.665 14:05:23 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:56.665 14:05:23 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.665 14:05:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.665 14:05:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.665 14:05:23 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:56.665 14:05:23 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.665 14:05:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.665 14:05:23 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.665 14:05:23 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:56.665 14:05:23 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:56.665 14:05:23 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.665 00:06:56.665 real 0m1.392s 00:06:56.665 user 0m1.253s 00:06:56.665 sys 0m0.140s 00:06:56.665 14:05:23 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:56.665 14:05:23 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:06:56.665 ************************************ 00:06:56.665 END TEST accel_decmop_full 00:06:56.665 ************************************ 00:06:56.665 14:05:23 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:56.665 14:05:23 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:56.665 14:05:23 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:56.665 14:05:23 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.665 ************************************ 00:06:56.665 START TEST accel_decomp_mcore 00:06:56.665 ************************************ 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:56.665 [2024-07-24 14:05:23.678096] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:56.665 [2024-07-24 14:05:23.678161] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4176946 ] 00:06:56.665 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.665 [2024-07-24 14:05:23.751388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:56.665 [2024-07-24 14:05:23.846427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.665 [2024-07-24 14:05:23.846484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.665 [2024-07-24 14:05:23.846601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:56.665 [2024-07-24 14:05:23.846603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:56.665 14:05:23 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.665 14:05:23 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.665 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.666 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:56.666 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.666 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.666 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.666 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:56.666 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.666 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.666 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.666 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:56.666 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.666 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.666 14:05:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.036 00:06:58.036 real 0m1.416s 00:06:58.036 user 0m4.698s 00:06:58.036 sys 0m0.157s 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:58.036 14:05:25 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:58.036 ************************************ 00:06:58.036 END TEST accel_decomp_mcore 00:06:58.036 ************************************ 00:06:58.036 14:05:25 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:58.036 14:05:25 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:58.036 14:05:25 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:58.036 14:05:25 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.036 ************************************ 00:06:58.036 START TEST accel_decomp_full_mcore 00:06:58.036 ************************************ 00:06:58.036 14:05:25 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:58.036 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:58.036 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:58.036 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.036 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:58.036 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.036 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:58.036 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:58.036 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.036 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.036 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.036 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@34 
-- # [[ 0 -gt 0 ]] 00:06:58.036 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.036 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:58.036 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:58.036 [2024-07-24 14:05:25.138493] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:58.036 [2024-07-24 14:05:25.138559] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4177200 ] 00:06:58.036 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.036 [2024-07-24 14:05:25.213534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:58.036 [2024-07-24 14:05:25.309630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.036 [2024-07-24 14:05:25.309697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.036 [2024-07-24 14:05:25.309805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:58.036 [2024-07-24 14:05:25.309800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.036 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:58.036 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.036 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.036 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.036 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:58.036 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.036 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.036 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.037 14:05:25 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.037 14:05:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.407 14:05:26 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.407 00:06:59.407 real 0m1.440s 00:06:59.407 user 0m4.757s 00:06:59.407 sys 0m0.169s 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:59.407 14:05:26 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:59.407 ************************************ 00:06:59.407 END TEST accel_decomp_full_mcore 00:06:59.407 ************************************ 00:06:59.407 14:05:26 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:59.407 14:05:26 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:59.407 14:05:26 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.407 14:05:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.407 ************************************ 00:06:59.407 START TEST accel_decomp_mthread 00:06:59.407 ************************************ 00:06:59.408 14:05:26 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:59.408 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:59.408 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:59.408 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.408 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:59.408 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.408 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:59.408 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:59.408 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.408 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.408 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.408 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.408 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.408 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:59.408 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
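Note: the accel_test wrapper traced above reduces to a direct accel_perf invocation. A minimal standalone reproduction of the accel_decomp_mthread run, using the paths exactly as they appear in this trace (the JSON config on /dev/fd/62 is produced by build_accel_config in accel.sh), would be:

    # Sketch only -- reproduces the accel_decomp_mthread run traced here
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
        -c /dev/fd/62 -t 1 -w decompress \
        -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2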
00:06:59.408 [2024-07-24 14:05:26.619295] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:06:59.408 [2024-07-24 14:05:26.619347] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4177377 ] 00:06:59.408 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.408 [2024-07-24 14:05:26.691088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.666 [2024-07-24 14:05:26.782061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.666 14:05:26 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:59.666 14:05:26 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:07:01.040 14:05:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:01.040 14:05:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.040 14:05:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.040 14:05:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.040 14:05:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:01.040 14:05:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.040 14:05:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.040 14:05:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.040 14:05:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:01.041 14:05:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.041 14:05:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.041 14:05:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.041 14:05:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:01.041 14:05:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.041 14:05:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.041 14:05:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.041 14:05:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:01.041 14:05:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.041 14:05:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.041 14:05:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.041 14:05:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:01.041 14:05:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.041 14:05:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.041 14:05:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.041 14:05:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:01.041 14:05:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.041 14:05:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.041 14:05:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.041 14:05:28 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.041 14:05:28 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:01.041 14:05:28 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.041 00:07:01.041 real 0m1.408s 00:07:01.041 user 0m1.265s 00:07:01.041 sys 0m0.145s 00:07:01.041 14:05:28 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:01.041 14:05:28 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:01.041 ************************************ 00:07:01.041 END TEST accel_decomp_mthread 00:07:01.041 ************************************ 00:07:01.041 14:05:28 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:01.041 14:05:28 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:01.041 14:05:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:01.041 14:05:28 accel 
-- common/autotest_common.sh@10 -- # set +x 00:07:01.041 ************************************ 00:07:01.041 START TEST accel_decomp_full_mthread 00:07:01.041 ************************************ 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:01.041 [2024-07-24 14:05:28.077422] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
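Note: per the trace, the only difference from the accel_decomp_mthread invocation above is the added -o 0, which selects the "full" variant: the val='111250 bytes' entry below shows it decompressing the whole bib payload in one operation instead of 4096-byte chunks. Standalone sketch, same assumptions as before:

    # Sketch only -- accel_decomp_full_mthread run as traced here
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
        -c /dev/fd/62 -t 1 -w decompress \
        -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2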
00:07:01.041 [2024-07-24 14:05:28.077498] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4177535 ] 00:07:01.041 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.041 [2024-07-24 14:05:28.147918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.041 [2024-07-24 14:05:28.236884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.041 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.042 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:01.042 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.042 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.042 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.042 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:01.042 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.042 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.042 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.042 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.042 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.042 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.042 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.042 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:01.042 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.042 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.042 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.042 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:01.042 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.042 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.042 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.042 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 
00:07:01.042 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.042 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.042 14:05:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.415 14:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:02.415 14:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.415 14:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.415 14:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.415 14:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:02.415 14:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.415 14:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.415 14:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.415 14:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:02.415 14:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.415 14:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.415 14:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.415 14:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:02.415 14:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.415 14:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.415 14:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.415 14:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:02.415 14:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.415 14:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.415 14:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.415 14:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:02.415 14:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.415 14:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.415 14:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.415 14:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:02.415 14:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.415 14:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.415 14:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.415 14:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.415 14:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:02.415 14:05:29 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.415 00:07:02.415 real 0m1.441s 00:07:02.415 user 0m1.277s 00:07:02.415 sys 0m0.167s 00:07:02.415 14:05:29 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.415 14:05:29 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:02.415 ************************************ 00:07:02.415 END TEST accel_decomp_full_mthread 00:07:02.415 
************************************ 00:07:02.415 14:05:29 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:02.415 14:05:29 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:02.416 14:05:29 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:02.416 14:05:29 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:02.416 14:05:29 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.416 14:05:29 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:02.416 14:05:29 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.416 14:05:29 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.416 14:05:29 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.416 14:05:29 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.416 14:05:29 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.416 14:05:29 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:02.416 14:05:29 accel -- accel/accel.sh@41 -- # jq -r . 00:07:02.416 ************************************ 00:07:02.416 START TEST accel_dif_functional_tests 00:07:02.416 ************************************ 00:07:02.416 14:05:29 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:02.416 [2024-07-24 14:05:29.584251] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:02.416 [2024-07-24 14:05:29.584326] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4177692 ] 00:07:02.416 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.416 [2024-07-24 14:05:29.657587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:02.416 [2024-07-24 14:05:29.750940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.416 [2024-07-24 14:05:29.750966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.416 [2024-07-24 14:05:29.750970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.674 00:07:02.674 00:07:02.674 CUnit - A unit testing framework for C - Version 2.1-3 00:07:02.674 http://cunit.sourceforge.net/ 00:07:02.674 00:07:02.674 00:07:02.674 Suite: accel_dif 00:07:02.674 Test: verify: DIF generated, GUARD check ...passed 00:07:02.674 Test: verify: DIF generated, APPTAG check ...passed 00:07:02.674 Test: verify: DIF generated, REFTAG check ...passed 00:07:02.674 Test: verify: DIF not generated, GUARD check ...[2024-07-24 14:05:29.843654] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:02.674 passed 00:07:02.674 Test: verify: DIF not generated, APPTAG check ...[2024-07-24 14:05:29.843726] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:02.674 passed 00:07:02.674 Test: verify: DIF not generated, REFTAG check ...[2024-07-24 14:05:29.843758] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:02.674 passed 00:07:02.674 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:02.674 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-24 14:05:29.843849] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:02.674 passed 00:07:02.674 Test: 
verify: APPTAG incorrect, no APPTAG check ...passed 00:07:02.674 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:02.674 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:02.674 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-24 14:05:29.843982] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:02.674 passed 00:07:02.674 Test: verify copy: DIF generated, GUARD check ...passed 00:07:02.674 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:02.674 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:02.674 Test: verify copy: DIF not generated, GUARD check ...[2024-07-24 14:05:29.844145] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:02.674 passed 00:07:02.674 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-24 14:05:29.844179] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:02.674 passed 00:07:02.674 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-24 14:05:29.844211] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:02.674 passed 00:07:02.674 Test: generate copy: DIF generated, GUARD check ...passed 00:07:02.674 Test: generate copy: DIF generated, APPTAG check ...passed 00:07:02.674 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:02.674 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:02.674 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:02.674 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:02.674 Test: generate copy: iovecs-len validate ...[2024-07-24 14:05:29.844422] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
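Note: the *ERROR* lines in this CUnit suite are expected output, not failures: each "DIF not generated" case (and the iovecs-len validate case) deliberately feeds mismatching guard/app/ref tags or misaligned bounce iovecs and passes when verification rejects them. The binary under test is the one run_test launched above; a standalone sketch:

    # Sketch only -- DIF functional tests; the JSON accel config arrives on fd 62
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62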
00:07:02.674 passed 00:07:02.674 Test: generate copy: buffer alignment validate ...passed 00:07:02.674 00:07:02.674 Run Summary: Type Total Ran Passed Failed Inactive 00:07:02.674 suites 1 1 n/a 0 0 00:07:02.674 tests 26 26 26 0 0 00:07:02.674 asserts 115 115 115 0 n/a 00:07:02.674 00:07:02.674 Elapsed time = 0.002 seconds 00:07:02.933 00:07:02.933 real 0m0.512s 00:07:02.933 user 0m0.781s 00:07:02.933 sys 0m0.188s 00:07:02.933 14:05:30 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.933 14:05:30 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:02.933 ************************************ 00:07:02.933 END TEST accel_dif_functional_tests 00:07:02.933 ************************************ 00:07:02.933 00:07:02.933 real 0m31.790s 00:07:02.933 user 0m35.024s 00:07:02.933 sys 0m4.751s 00:07:02.933 14:05:30 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.933 14:05:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.933 ************************************ 00:07:02.933 END TEST accel 00:07:02.933 ************************************ 00:07:02.933 14:05:30 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:02.933 14:05:30 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:02.933 14:05:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:02.933 14:05:30 -- common/autotest_common.sh@10 -- # set +x 00:07:02.933 ************************************ 00:07:02.933 START TEST accel_rpc 00:07:02.933 ************************************ 00:07:02.933 14:05:30 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:02.933 * Looking for test storage... 00:07:02.933 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:07:02.933 14:05:30 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:02.933 14:05:30 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=4177878 00:07:02.933 14:05:30 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:02.933 14:05:30 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 4177878 00:07:02.933 14:05:30 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 4177878 ']' 00:07:02.933 14:05:30 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.933 14:05:30 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:02.933 14:05:30 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.933 14:05:30 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:02.933 14:05:30 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.933 [2024-07-24 14:05:30.230864] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
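Note: the START TEST / END TEST banners and the real/user/sys triplets throughout this log come from the run_test helper in common/autotest_common.sh. A rough reconstruction inferred only from its observable output here (the real implementation also manages xtrace state and exit codes) is:

    # Rough sketch of run_test, inferred from the banners in this log -- not verbatim
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"        # emits the real/user/sys lines seen after each test
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }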
00:07:02.933 [2024-07-24 14:05:30.230953] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4177878 ] 00:07:02.933 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.933 [2024-07-24 14:05:30.300305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.191 [2024-07-24 14:05:30.389563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.191 14:05:30 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:03.191 14:05:30 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:03.191 14:05:30 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:03.191 14:05:30 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:03.191 14:05:30 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:03.191 14:05:30 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:03.191 14:05:30 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:03.191 14:05:30 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:03.191 14:05:30 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.191 14:05:30 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.191 ************************************ 00:07:03.191 START TEST accel_assign_opcode 00:07:03.191 ************************************ 00:07:03.191 14:05:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:07:03.191 14:05:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:03.191 14:05:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.191 14:05:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:03.191 [2024-07-24 14:05:30.474278] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:03.191 14:05:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.191 14:05:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:03.191 14:05:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.191 14:05:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:03.191 [2024-07-24 14:05:30.482290] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:03.191 14:05:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.191 14:05:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:03.191 14:05:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.191 14:05:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:03.450 14:05:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.450 14:05:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:03.450 14:05:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.450 14:05:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:03.450 14:05:30 
accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:03.450 14:05:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:03.450 14:05:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.450 software 00:07:03.450 00:07:03.450 real 0m0.283s 00:07:03.450 user 0m0.039s 00:07:03.450 sys 0m0.004s 00:07:03.450 14:05:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.450 14:05:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:03.450 ************************************ 00:07:03.450 END TEST accel_assign_opcode 00:07:03.450 ************************************ 00:07:03.450 14:05:30 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 4177878 00:07:03.450 14:05:30 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 4177878 ']' 00:07:03.450 14:05:30 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 4177878 00:07:03.450 14:05:30 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:07:03.450 14:05:30 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:03.450 14:05:30 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4177878 00:07:03.450 14:05:30 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:03.450 14:05:30 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:03.450 14:05:30 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4177878' 00:07:03.450 killing process with pid 4177878 00:07:03.450 14:05:30 accel_rpc -- common/autotest_common.sh@965 -- # kill 4177878 00:07:03.450 14:05:30 accel_rpc -- common/autotest_common.sh@970 -- # wait 4177878 00:07:04.017 00:07:04.017 real 0m1.079s 00:07:04.017 user 0m1.010s 00:07:04.017 sys 0m0.424s 00:07:04.017 14:05:31 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:04.017 14:05:31 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.017 ************************************ 00:07:04.017 END TEST accel_rpc 00:07:04.017 ************************************ 00:07:04.017 14:05:31 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:04.017 14:05:31 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:04.017 14:05:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:04.017 14:05:31 -- common/autotest_common.sh@10 -- # set +x 00:07:04.017 ************************************ 00:07:04.017 START TEST app_cmdline 00:07:04.017 ************************************ 00:07:04.017 14:05:31 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:04.017 * Looking for test storage... 
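Note: cmdline.sh starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods (traced below), so only those two methods may be called over /var/tmp/spdk.sock; any other method fails with JSON-RPC error -32601, which is exactly what the env_dpdk_get_mem_stats probe further down asserts. Condensed sketch against a target started that way:

    # Sketch only -- exercising the RPC allow-list checked by this test
    scripts/rpc.py spdk_get_version          # allowed: returns the version JSON
    scripts/rpc.py rpc_get_methods           # allowed
    scripts/rpc.py env_dpdk_get_mem_stats    # rejected: -32601 "Method not found"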
00:07:04.017 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:04.017 14:05:31 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:04.017 14:05:31 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=4178082 00:07:04.017 14:05:31 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:04.017 14:05:31 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 4178082 00:07:04.017 14:05:31 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 4178082 ']' 00:07:04.017 14:05:31 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.017 14:05:31 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:04.017 14:05:31 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.017 14:05:31 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:04.017 14:05:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:04.017 [2024-07-24 14:05:31.362297] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:04.017 [2024-07-24 14:05:31.362380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4178082 ] 00:07:04.275 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.275 [2024-07-24 14:05:31.431971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.275 [2024-07-24 14:05:31.520279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.581 14:05:31 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:04.581 14:05:31 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:07:04.581 14:05:31 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:04.840 { 00:07:04.840 "version": "SPDK v24.05.1-pre git sha1 241d0f3c9", 00:07:04.840 "fields": { 00:07:04.840 "major": 24, 00:07:04.840 "minor": 5, 00:07:04.840 "patch": 1, 00:07:04.840 "suffix": "-pre", 00:07:04.840 "commit": "241d0f3c9" 00:07:04.840 } 00:07:04.840 } 00:07:04.840 14:05:32 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:04.840 14:05:32 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:04.840 14:05:32 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:04.840 14:05:32 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:04.840 14:05:32 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:04.840 14:05:32 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:04.840 14:05:32 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.840 14:05:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:04.840 14:05:32 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:04.840 14:05:32 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.840 14:05:32 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:04.840 14:05:32 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:04.840 14:05:32 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:04.840 14:05:32 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:04.840 14:05:32 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:04.840 14:05:32 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:04.840 14:05:32 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.840 14:05:32 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:04.840 14:05:32 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.840 14:05:32 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:04.840 14:05:32 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.840 14:05:32 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:04.840 14:05:32 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:07:04.840 14:05:32 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:05.098 request: 00:07:05.098 { 00:07:05.098 "method": "env_dpdk_get_mem_stats", 00:07:05.098 "req_id": 1 00:07:05.098 } 00:07:05.098 Got JSON-RPC error response 00:07:05.098 response: 00:07:05.098 { 00:07:05.098 "code": -32601, 00:07:05.098 "message": "Method not found" 00:07:05.098 } 00:07:05.098 14:05:32 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:05.098 14:05:32 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:05.098 14:05:32 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:05.098 14:05:32 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:05.098 14:05:32 app_cmdline -- app/cmdline.sh@1 -- # killprocess 4178082 00:07:05.098 14:05:32 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 4178082 ']' 00:07:05.098 14:05:32 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 4178082 00:07:05.098 14:05:32 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:07:05.098 14:05:32 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:05.099 14:05:32 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4178082 00:07:05.099 14:05:32 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:05.099 14:05:32 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:05.099 14:05:32 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4178082' 00:07:05.099 killing process with pid 4178082 00:07:05.099 14:05:32 app_cmdline -- common/autotest_common.sh@965 -- # kill 4178082 00:07:05.099 14:05:32 app_cmdline -- common/autotest_common.sh@970 -- # wait 4178082 00:07:05.357 00:07:05.357 real 0m1.464s 00:07:05.357 user 0m1.795s 00:07:05.357 sys 0m0.458s 00:07:05.357 14:05:32 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.615 14:05:32 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 00:07:05.615 ************************************ 00:07:05.615 END TEST app_cmdline 00:07:05.615 ************************************ 00:07:05.615 14:05:32 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:05.615 14:05:32 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:05.615 14:05:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.615 14:05:32 -- common/autotest_common.sh@10 -- # set +x 00:07:05.615 ************************************ 00:07:05.615 START TEST version 00:07:05.615 ************************************ 00:07:05.615 14:05:32 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:05.615 * Looking for test storage... 00:07:05.615 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:05.615 14:05:32 version -- app/version.sh@17 -- # get_header_version major 00:07:05.615 14:05:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:05.615 14:05:32 version -- app/version.sh@14 -- # cut -f2 00:07:05.615 14:05:32 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.615 14:05:32 version -- app/version.sh@17 -- # major=24 00:07:05.615 14:05:32 version -- app/version.sh@18 -- # get_header_version minor 00:07:05.615 14:05:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:05.615 14:05:32 version -- app/version.sh@14 -- # cut -f2 00:07:05.615 14:05:32 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.615 14:05:32 version -- app/version.sh@18 -- # minor=5 00:07:05.615 14:05:32 version -- app/version.sh@19 -- # get_header_version patch 00:07:05.615 14:05:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:05.615 14:05:32 version -- app/version.sh@14 -- # cut -f2 00:07:05.615 14:05:32 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.615 14:05:32 version -- app/version.sh@19 -- # patch=1 00:07:05.615 14:05:32 version -- app/version.sh@20 -- # get_header_version suffix 00:07:05.615 14:05:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:05.615 14:05:32 version -- app/version.sh@14 -- # cut -f2 00:07:05.615 14:05:32 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.615 14:05:32 version -- app/version.sh@20 -- # suffix=-pre 00:07:05.615 14:05:32 version -- app/version.sh@22 -- # version=24.5 00:07:05.615 14:05:32 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:05.615 14:05:32 version -- app/version.sh@25 -- # version=24.5.1 00:07:05.615 14:05:32 version -- app/version.sh@28 -- # version=24.5.1rc0 00:07:05.615 14:05:32 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:05.615 14:05:32 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:05.615 14:05:32 version -- app/version.sh@30 -- # 
py_version=24.5.1rc0 00:07:05.615 14:05:32 version -- app/version.sh@31 -- # [[ 24.5.1rc0 == \2\4\.\5\.\1\r\c\0 ]] 00:07:05.615 00:07:05.615 real 0m0.103s 00:07:05.615 user 0m0.046s 00:07:05.615 sys 0m0.077s 00:07:05.615 14:05:32 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.615 14:05:32 version -- common/autotest_common.sh@10 -- # set +x 00:07:05.615 ************************************ 00:07:05.615 END TEST version 00:07:05.615 ************************************ 00:07:05.615 14:05:32 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:05.615 14:05:32 -- spdk/autotest.sh@198 -- # uname -s 00:07:05.615 14:05:32 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:05.615 14:05:32 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:05.615 14:05:32 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:05.615 14:05:32 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:05.615 14:05:32 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:05.615 14:05:32 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:05.615 14:05:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:05.615 14:05:32 -- common/autotest_common.sh@10 -- # set +x 00:07:05.615 14:05:32 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:05.615 14:05:32 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:05.615 14:05:32 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:05.615 14:05:32 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:05.615 14:05:32 -- spdk/autotest.sh@283 -- # '[' rdma = rdma ']' 00:07:05.615 14:05:32 -- spdk/autotest.sh@284 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:05.615 14:05:32 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:05.615 14:05:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.615 14:05:32 -- common/autotest_common.sh@10 -- # set +x 00:07:05.615 ************************************ 00:07:05.615 START TEST nvmf_rdma 00:07:05.615 ************************************ 00:07:05.615 14:05:32 nvmf_rdma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:05.875 * Looking for test storage... 00:07:05.875 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:07:05.875 14:05:32 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:07:05.875 14:05:32 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:05.875 14:05:32 nvmf_rdma -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.875 14:05:32 nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:07:05.875 14:05:32 nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.875 14:05:32 nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.875 14:05:32 nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.875 14:05:32 nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.875 14:05:32 nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.875 14:05:32 nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.875 14:05:32 nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.875 14:05:32 nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.875 14:05:32 nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.875 14:05:32 nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.875 14:05:33 nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:07:05.875 14:05:33 nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:07:05.875 14:05:33 nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.875 14:05:33 nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.875 14:05:33 nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:05.875 14:05:33 nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.875 14:05:33 nvmf_rdma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:05.875 14:05:33 nvmf_rdma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.875 14:05:33 nvmf_rdma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.875 14:05:33 nvmf_rdma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.875 14:05:33 nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.875 14:05:33 nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.875 14:05:33 nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.875 14:05:33 nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:07:05.875 14:05:33 nvmf_rdma -- paths/export.sh@6 -- # 
echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.875 14:05:33 nvmf_rdma -- nvmf/common.sh@47 -- # : 0 00:07:05.875 14:05:33 nvmf_rdma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:05.875 14:05:33 nvmf_rdma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:05.875 14:05:33 nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.875 14:05:33 nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.875 14:05:33 nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.875 14:05:33 nvmf_rdma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:05.875 14:05:33 nvmf_rdma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:05.875 14:05:33 nvmf_rdma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:05.875 14:05:33 nvmf_rdma -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:05.875 14:05:33 nvmf_rdma -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:05.875 14:05:33 nvmf_rdma -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:05.875 14:05:33 nvmf_rdma -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:05.875 14:05:33 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:05.875 14:05:33 nvmf_rdma -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:05.875 14:05:33 nvmf_rdma -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:05.875 14:05:33 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:05.875 14:05:33 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.875 14:05:33 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:05.875 ************************************ 00:07:05.875 START TEST nvmf_example 00:07:05.875 ************************************ 00:07:05.875 14:05:33 nvmf_rdma.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:05.875 * Looking for test storage... 
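The host identity threaded through every later connect comes straight from nvme-cli, as the gen-hostnqn call in the trace shows. A minimal standalone sketch of the same derivation, assuming only that nvme-cli is installed (variable names follow the trace; the UUID is whatever gen-hostnqn emits on this host):

    NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:6b85a288-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # --hostid wants the bare UUID
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")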
00:07:05.875 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:05.875 14:05:33 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.875 14:05:33 nvmf_rdma.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:05.875 14:05:33 nvmf_rdma.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.875 14:05:33 nvmf_rdma.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.875 14:05:33 nvmf_rdma.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.875 14:05:33 nvmf_rdma.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.875 14:05:33 nvmf_rdma.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.875 14:05:33 nvmf_rdma.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.875 14:05:33 nvmf_rdma.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.875 14:05:33 nvmf_rdma.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.875 14:05:33 nvmf_rdma.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.875 14:05:33 nvmf_rdma.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.875 14:05:33 nvmf_rdma.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:07:05.875 14:05:33 nvmf_rdma.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:07:05.875 14:05:33 nvmf_rdma.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.875 14:05:33 nvmf_rdma.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.875 14:05:33 nvmf_rdma.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:05.875 14:05:33 nvmf_rdma.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:05.876 14:05:33 
nvmf_rdma.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:05.876 14:05:33 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:07:08.405 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:07:08.405 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.405 14:05:35 
nvmf_rdma.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:07:08.405 Found net devices under 0000:81:00.0: mlx_0_0 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:07:08.405 Found net devices under 0000:81:00.1: mlx_0_1 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@420 -- # rdma_device_init 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:07:08.405 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@58 -- # uname 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@502 -- # allocate_nic_ips 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- 
nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:08.406 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:08.406 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:07:08.406 altname enp129s0f0np0 00:07:08.406 inet 192.168.100.8/24 scope global mlx_0_0 00:07:08.406 valid_lft forever preferred_lft forever 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:08.406 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:08.406 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:07:08.406 altname enp129s0f1np1 00:07:08.406 inet 192.168.100.9/24 scope global mlx_0_1 00:07:08.406 valid_lft forever preferred_lft forever 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 
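The awk/cut pipelines above are the harness resolving each RDMA netdev to its IPv4 address. The traced logic reduces to a small helper, sketched here assuming iproute2's one-line (-o) output format:

    get_ip_address() {
        local interface=$1
        # with `ip -o -4`, field 4 is ADDR/PREFIX, e.g. 192.168.100.8/24
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # -> 192.168.100.8 on this rig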
00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:07:08.406 192.168.100.9' 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:07:08.406 192.168.100.9' 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # head -n 1 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # echo 
'192.168.100.8 00:07:08.406 192.168.100.9' 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # tail -n +2 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # head -n 1 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=4180400 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 4180400 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 4180400 ']' 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
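With both ports resolved, the first and second target IPs fall out of plain head/tail slicing of the newline-separated list, exactly as traced above:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'                            # one entry per RDMA port
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9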
00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:08.406 14:05:35 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:08.406 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.339 14:05:36 nvmf_rdma.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:09.339 14:05:36 nvmf_rdma.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:07:09.339 14:05:36 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:09.339 14:05:36 nvmf_rdma.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:09.339 14:05:36 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:09.339 14:05:36 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:09.339 14:05:36 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.339 14:05:36 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:09.599 14:05:36 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.599 14:05:36 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:09.599 14:05:36 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.599 14:05:36 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:09.599 14:05:36 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.599 14:05:36 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:09.599 14:05:36 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:09.599 14:05:36 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.599 14:05:36 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:09.599 14:05:36 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.599 14:05:36 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:09.599 14:05:36 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:09.599 14:05:36 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.599 14:05:36 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:09.599 14:05:36 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.599 14:05:36 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:09.599 14:05:36 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.599 14:05:36 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:09.599 14:05:36 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.599 14:05:36 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:09.599 14:05:36 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
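Condensed, the target that the example app just provisioned amounts to five RPCs; rpc_cmd in the trace is a wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, with paths shortened here:

    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py bdev_malloc_create 64 512          # 64 MiB malloc bdev, 512 B blocks -> Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The spdk_nvme_perf run that follows then drives queue-depth-64, 4 KiB random I/O at a 30% read mix (-M 30) against that subsystem for 10 seconds.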
00:07:09.599 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.820 Initializing NVMe Controllers 00:07:21.820 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:07:21.820 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:21.820 Initialization complete. Launching workers. 00:07:21.820 ======================================================== 00:07:21.820 Latency(us) 00:07:21.820 Device Information : IOPS MiB/s Average min max 00:07:21.820 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 20175.80 78.81 3171.68 826.82 16045.26 00:07:21.820 ======================================================== 00:07:21.820 Total : 20175.80 78.81 3171.68 826.82 16045.26 00:07:21.820 00:07:21.820 14:05:48 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:21.820 14:05:48 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:21.820 14:05:48 nvmf_rdma.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:21.820 14:05:48 nvmf_rdma.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:21.820 14:05:48 nvmf_rdma.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:21.820 14:05:48 nvmf_rdma.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:21.820 14:05:48 nvmf_rdma.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:21.820 14:05:48 nvmf_rdma.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:21.820 14:05:48 nvmf_rdma.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:21.820 rmmod nvme_rdma 00:07:21.820 rmmod nvme_fabrics 00:07:21.820 14:05:48 nvmf_rdma.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:21.820 14:05:48 nvmf_rdma.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:21.820 14:05:48 nvmf_rdma.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:21.820 14:05:48 nvmf_rdma.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 4180400 ']' 00:07:21.820 14:05:48 nvmf_rdma.nvmf_example -- nvmf/common.sh@490 -- # killprocess 4180400 00:07:21.820 14:05:48 nvmf_rdma.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 4180400 ']' 00:07:21.820 14:05:48 nvmf_rdma.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 4180400 00:07:21.820 14:05:48 nvmf_rdma.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:07:21.820 14:05:48 nvmf_rdma.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:21.820 14:05:48 nvmf_rdma.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4180400 00:07:21.820 14:05:48 nvmf_rdma.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:07:21.820 14:05:48 nvmf_rdma.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:07:21.820 14:05:48 nvmf_rdma.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4180400' 00:07:21.820 killing process with pid 4180400 00:07:21.820 14:05:48 nvmf_rdma.nvmf_example -- common/autotest_common.sh@965 -- # kill 4180400 00:07:21.820 14:05:48 nvmf_rdma.nvmf_example -- common/autotest_common.sh@970 -- # wait 4180400 00:07:21.820 nvmf threads initialize successfully 00:07:21.820 bdev subsystem init successfully 00:07:21.820 created a nvmf target service 00:07:21.820 create targets's poll groups done 00:07:21.820 all subsystems of target started 00:07:21.820 nvmf target is running 00:07:21.820 all subsystems of target stopped 00:07:21.820 destroy targets's poll groups done 
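Teardown mirrors setup. A simplified sketch of what nvmftestfini just did, assuming (true here) that the target is a direct child of the test shell and skipping the sudo-ownership check the real helper performs:

    sync                                   # settle I/O before unloading modules
    modprobe -v -r nvme-rdma               # host-side initiator modules
    modprobe -v -r nvme-fabrics
    if kill -0 "$nvmfpid" 2>/dev/null; then
        kill "$nvmfpid"                    # SIGTERM the nvmf example app
        wait "$nvmfpid"                    # reap it so the listener and hugepages free up
    fi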
00:07:21.820 destroyed the nvmf target service 00:07:21.820 bdev subsystem finish successfully 00:07:21.820 nvmf threads destroy successfully 00:07:21.820 14:05:48 nvmf_rdma.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:21.820 14:05:48 nvmf_rdma.nvmf_example -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:07:21.820 14:05:48 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:21.820 14:05:48 nvmf_rdma.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:21.820 14:05:48 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:21.820 00:07:21.820 real 0m15.379s 00:07:21.820 user 0m51.801s 00:07:21.820 sys 0m2.062s 00:07:21.820 14:05:48 nvmf_rdma.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:21.820 14:05:48 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:21.820 ************************************ 00:07:21.820 END TEST nvmf_example 00:07:21.820 ************************************ 00:07:21.820 14:05:48 nvmf_rdma -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:07:21.820 14:05:48 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:21.820 14:05:48 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:21.820 14:05:48 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:21.820 ************************************ 00:07:21.820 START TEST nvmf_filesystem 00:07:21.820 ************************************ 00:07:21.820 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:07:21.820 * Looking for test storage... 
00:07:21.820 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:21.820 14:05:48 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:07:21.820 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:21.820 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:21.820 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:21.820 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:21.820 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:21.821 14:05:48 
nvmf_rdma.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:21.821 14:05:48 
nvmf_rdma.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:21.821 
14:05:48 nvmf_rdma.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:07:21.821 14:05:48 nvmf_rdma.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:21.821 #define SPDK_CONFIG_H 00:07:21.821 #define SPDK_CONFIG_APPS 1 00:07:21.821 #define SPDK_CONFIG_ARCH native 00:07:21.821 #undef SPDK_CONFIG_ASAN 00:07:21.821 #undef SPDK_CONFIG_AVAHI 00:07:21.821 #undef SPDK_CONFIG_CET 00:07:21.821 #define SPDK_CONFIG_COVERAGE 1 00:07:21.821 #define SPDK_CONFIG_CROSS_PREFIX 00:07:21.821 #undef SPDK_CONFIG_CRYPTO 00:07:21.821 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:21.821 #undef SPDK_CONFIG_CUSTOMOCF 00:07:21.821 #undef SPDK_CONFIG_DAOS 00:07:21.821 #define SPDK_CONFIG_DAOS_DIR 00:07:21.821 #define SPDK_CONFIG_DEBUG 1 00:07:21.821 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:21.821 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:07:21.821 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:07:21.821 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:21.821 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:21.821 #undef SPDK_CONFIG_DPDK_UADK 00:07:21.821 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:07:21.821 #define SPDK_CONFIG_EXAMPLES 1 00:07:21.821 #undef SPDK_CONFIG_FC 00:07:21.821 #define SPDK_CONFIG_FC_PATH 00:07:21.822 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:21.822 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:21.822 #undef SPDK_CONFIG_FUSE 00:07:21.822 #undef SPDK_CONFIG_FUZZER 00:07:21.822 #define SPDK_CONFIG_FUZZER_LIB 00:07:21.822 #undef SPDK_CONFIG_GOLANG 00:07:21.822 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:21.822 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:21.822 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:21.822 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:21.822 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:21.822 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:21.822 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:21.822 #define SPDK_CONFIG_IDXD 1 00:07:21.822 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:21.822 #undef SPDK_CONFIG_IPSEC_MB 00:07:21.822 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:21.822 #define SPDK_CONFIG_ISAL 1 00:07:21.822 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:21.822 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:21.822 #define SPDK_CONFIG_LIBDIR 00:07:21.822 #undef SPDK_CONFIG_LTO 00:07:21.822 #define SPDK_CONFIG_MAX_LCORES 
00:07:21.822 #define SPDK_CONFIG_NVME_CUSE 1 00:07:21.822 #undef SPDK_CONFIG_OCF 00:07:21.822 #define SPDK_CONFIG_OCF_PATH 00:07:21.822 #define SPDK_CONFIG_OPENSSL_PATH 00:07:21.822 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:21.822 #define SPDK_CONFIG_PGO_DIR 00:07:21.822 #undef SPDK_CONFIG_PGO_USE 00:07:21.822 #define SPDK_CONFIG_PREFIX /usr/local 00:07:21.822 #undef SPDK_CONFIG_RAID5F 00:07:21.822 #undef SPDK_CONFIG_RBD 00:07:21.822 #define SPDK_CONFIG_RDMA 1 00:07:21.822 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:21.822 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:21.822 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:21.822 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:21.822 #define SPDK_CONFIG_SHARED 1 00:07:21.822 #undef SPDK_CONFIG_SMA 00:07:21.822 #define SPDK_CONFIG_TESTS 1 00:07:21.822 #undef SPDK_CONFIG_TSAN 00:07:21.822 #define SPDK_CONFIG_UBLK 1 00:07:21.822 #define SPDK_CONFIG_UBSAN 1 00:07:21.822 #undef SPDK_CONFIG_UNIT_TESTS 00:07:21.822 #undef SPDK_CONFIG_URING 00:07:21.822 #define SPDK_CONFIG_URING_PATH 00:07:21.822 #undef SPDK_CONFIG_URING_ZNS 00:07:21.822 #undef SPDK_CONFIG_USDT 00:07:21.822 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:21.822 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:21.822 #undef SPDK_CONFIG_VFIO_USER 00:07:21.822 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:21.822 #define SPDK_CONFIG_VHOST 1 00:07:21.822 #define SPDK_CONFIG_VIRTIO 1 00:07:21.822 #undef SPDK_CONFIG_VTUNE 00:07:21.822 #define SPDK_CONFIG_VTUNE_DIR 00:07:21.822 #define SPDK_CONFIG_WERROR 1 00:07:21.822 #define SPDK_CONFIG_WPDK_DIR 00:07:21.822 #undef SPDK_CONFIG_XNVME 00:07:21.822 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load 
collect-vmstat) 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 1 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:21.822 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 0 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@101 -- # : rdma 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:07:21.823 14:05:48 
nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@137 -- # : v22.11.4 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@153 -- # : mlx5 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@156 -- # export 
SPDK_TEST_SMA 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@157 -- # : 0 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:21.823 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@199 -- # cat 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j48 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=rdma 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 4181949 ]] 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 4181949 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.Eqy8rj 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.Eqy8rj/tests/target /tmp/spdk.Eqy8rj 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:07:21.824 
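The set_test_storage trace that follows parses `df -T` output into per-mount associative arrays; note the field order in the `read` (source, fstype, size, used, available). A standalone sketch of the same parsing, assuming GNU coreutils df; whether the real script asks df for byte units or scales 1K blocks afterwards is not visible in the trace, so this sketch scales explicitly:

#!/usr/bin/env bash
declare -A mounts fss sizes avails uses

# df -T columns: Filesystem, Type, 1K-blocks, Used, Available, Use%, Mounted on.
# grep -v Filesystem drops the header line, as in the trace.
while read -r source fs size use avail _ mount; do
	mounts["$mount"]=$source
	fss["$mount"]=$fs
	sizes["$mount"]=$((size * 1024))    # assumption: convert 1K blocks to bytes
	avails["$mount"]=$((avail * 1024))
	uses["$mount"]=$((use * 1024))
done < <(df -T | grep -v Filesystem)

echo "/ is ${fss[/]} with ${avails[/]} bytes free"
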
14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=932700160 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4351729664 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:07:21.824 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=53359808512 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=61994602496 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=8634793984 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30993924096 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997299200 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=3375104 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12389937152 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=12398923776 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=8986624 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@361 -- # avails["$mount"]=30996545536 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997303296 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=757760 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6199455744 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6199459840 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:07:21.825 * Looking for test storage... 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=53359808512 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=10849386496 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:21.825 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:07:21.825 14:05:48 
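The arithmetic in this stretch picks the mount backing the test directory and checks whether the request will fit: requested_size is 2214592512 bytes (2 GiB plus a 64 MiB margin), target_space is the mount's available space, and new_size (current use plus the request) must stay under 95% of the filesystem. A sketch of that decision with the figures from this run wired in as illustrative values:

#!/usr/bin/env bash
requested_size=$((2 * 1024**3 + 64 * 1024**2))   # 2214592512, as in the trace

# Figures reported above for the spdk_root overlay mount.
target_space=53359808512   # avails[/]
fs_size=61994602496        # sizes[/]
used=8634793984            # uses[/]

if (( target_space == 0 || target_space < requested_size )); then
	echo "not enough room, try the next candidate" >&2
else
	new_size=$((used + requested_size))          # 10849386496 in this run
	# Refuse the mount if the request would push usage past 95%.
	if (( new_size * 100 / fs_size > 95 )); then
		echo "would exceed 95% usage, skipping" >&2
	else
		echo "using this mount for test storage"
	fi
fi
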
nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:21.825 14:05:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:21.826 
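nvmf/common.sh above derives the host identity from `nvme gen-hostnqn` and keeps the bare UUID as NVME_HOSTID. A minimal sketch assuming nvme-cli is installed; the parameter expansion used to split out the UUID is this sketch's choice, not necessarily the script's:

#!/usr/bin/env bash
# gen-hostnqn prints e.g. nqn.2014-08.org.nvmexpress:uuid:6b85a288-...
NVME_HOSTNQN=$(nvme gen-hostnqn)
NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}   # keep only the UUID part

# Stored as an array so it can be spliced into later `nvme connect` calls.
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
echo "connecting as ${NVME_HOST[*]}"
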
14:05:48 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:21.826 14:05:48 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:24.359 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:24.359 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:24.359 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:24.359 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:24.359 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:24.359 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:24.359 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:24.359 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:24.359 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:24.359 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:24.359 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:24.359 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:24.359 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:24.359 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:24.359 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:24.359 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:24.359 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:24.359 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@304 -- 
# x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:24.359 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:24.359 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:24.359 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:24.359 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:24.359 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:24.359 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:24.359 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:24.359 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:24.359 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:24.359 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:24.359 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:24.359 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:24.359 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:24.359 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:24.359 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:24.359 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:24.359 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:07:24.359 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:07:24.360 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:24.360 14:05:51 
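The scan above recognizes both ports of a Mellanox NIC (vendor 0x15b3, device 0x1015) from the pci_bus_cache lookup tables, then maps each PCI address to its kernel net device through sysfs, which is where the "Found net devices under 0000:81:00.x" lines come from. A minimal sketch of that mapping, assuming a Linux sysfs layout and reusing the PCI address from the trace:

#!/usr/bin/env bash
shopt -s nullglob          # so a NIC with no netdev yields an empty array
pci=0000:81:00.0           # address reported in the trace

# Every PCI NIC exposes its kernel net device(s) under .../net/.
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep names

if (( ${#pci_net_devs[@]} > 0 )); then
	echo "Found net devices under $pci: ${pci_net_devs[*]}"
fi
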
nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:07:24.360 Found net devices under 0000:81:00.0: mlx_0_0 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:07:24.360 Found net devices under 0000:81:00.1: mlx_0_1 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@420 -- # rdma_device_init 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@58 -- # uname 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev 
rxe_net_devs 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:24.360 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:24.360 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:07:24.360 altname enp129s0f0np0 00:07:24.360 inet 192.168.100.8/24 scope global mlx_0_0 00:07:24.360 valid_lft forever preferred_lft forever 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- 
nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:24.360 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:24.360 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:07:24.360 altname enp129s0f1np1 00:07:24.360 inet 192.168.100.9/24 scope global mlx_0_1 00:07:24.360 valid_lft forever preferred_lft forever 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:24.360 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 
00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:07:24.361 192.168.100.9' 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:07:24.361 192.168.100.9' 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # head -n 1 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:07:24.361 192.168.100.9' 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # tail -n +2 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # head -n 1 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:24.361 ************************************ 00:07:24.361 START TEST nvmf_filesystem_no_in_capsule 00:07:24.361 ************************************ 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=4183999 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 
4183999 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 4183999 ']' 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:24.361 [2024-07-24 14:05:51.321818] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:24.361 [2024-07-24 14:05:51.321905] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:24.361 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.361 [2024-07-24 14:05:51.389667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:24.361 [2024-07-24 14:05:51.479550] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:24.361 [2024-07-24 14:05:51.479621] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:24.361 [2024-07-24 14:05:51.479634] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:24.361 [2024-07-24 14:05:51.479645] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:24.361 [2024-07-24 14:05:51.479655] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
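Before the target start traced above, allocate_nic_ips walked the two mlx netdevs and derived the test addresses. Stripped of xtrace noise, the idiom (a sketch reconstructed from the nvmf/common.sh@113 and @456-458 trace records, not the verbatim helpers) is:

  # First IPv4 address on an RDMA-capable netdev, e.g. 192.168.100.8 for mlx_0_0
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  # RDMA_IP_LIST carries one address per line; the first and second entries
  # become the target IPs, exactly as the head/tail pipeline above shows
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)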
00:07:24.361 [2024-07-24 14:05:51.479736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.361 [2024-07-24 14:05:51.479838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.361 [2024-07-24 14:05:51.479868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:24.361 [2024-07-24 14:05:51.479871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.361 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:24.361 [2024-07-24 14:05:51.615368] rdma.c:2726:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:07:24.361 [2024-07-24 14:05:51.638971] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb8da00/0xb91ef0) succeed. 00:07:24.361 [2024-07-24 14:05:51.649965] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb8eff0/0xbd3580) succeed. 
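rpc_cmd in this harness wraps scripts/rpc.py against /var/tmp/spdk.sock, so the transport bring-up just traced corresponds to roughly this direct call (paths assumed from the workspace layout):

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
      nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0

Note the rdma.c warning above: the requested in-capsule data size of 0 is raised to 256 bytes, the minimum required to support msdbd=16; the companion in-capsule test later in this run passes -c 4096 instead.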
00:07:24.620 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.620 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:24.620 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.620 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:24.620 Malloc1 00:07:24.620 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.620 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:24.620 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.620 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:24.620 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.620 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:24.620 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.620 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:24.620 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.620 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:24.620 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.620 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:24.620 [2024-07-24 14:05:51.946800] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:24.620 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.620 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:24.620 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:24.620 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:24.620 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:24.620 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:24.620 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:24.620 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.620 14:05:51 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:24.620 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.620 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:24.620 { 00:07:24.620 "name": "Malloc1", 00:07:24.620 "aliases": [ 00:07:24.620 "e3b2ed53-afe4-45f5-b04f-013327f04333" 00:07:24.620 ], 00:07:24.620 "product_name": "Malloc disk", 00:07:24.620 "block_size": 512, 00:07:24.620 "num_blocks": 1048576, 00:07:24.620 "uuid": "e3b2ed53-afe4-45f5-b04f-013327f04333", 00:07:24.620 "assigned_rate_limits": { 00:07:24.620 "rw_ios_per_sec": 0, 00:07:24.620 "rw_mbytes_per_sec": 0, 00:07:24.620 "r_mbytes_per_sec": 0, 00:07:24.620 "w_mbytes_per_sec": 0 00:07:24.620 }, 00:07:24.620 "claimed": true, 00:07:24.620 "claim_type": "exclusive_write", 00:07:24.620 "zoned": false, 00:07:24.620 "supported_io_types": { 00:07:24.621 "read": true, 00:07:24.621 "write": true, 00:07:24.621 "unmap": true, 00:07:24.621 "write_zeroes": true, 00:07:24.621 "flush": true, 00:07:24.621 "reset": true, 00:07:24.621 "compare": false, 00:07:24.621 "compare_and_write": false, 00:07:24.621 "abort": true, 00:07:24.621 "nvme_admin": false, 00:07:24.621 "nvme_io": false 00:07:24.621 }, 00:07:24.621 "memory_domains": [ 00:07:24.621 { 00:07:24.621 "dma_device_id": "system", 00:07:24.621 "dma_device_type": 1 00:07:24.621 }, 00:07:24.621 { 00:07:24.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.621 "dma_device_type": 2 00:07:24.621 } 00:07:24.621 ], 00:07:24.621 "driver_specific": {} 00:07:24.621 } 00:07:24.621 ]' 00:07:24.621 14:05:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:24.878 14:05:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:24.878 14:05:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:24.878 14:05:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:24.878 14:05:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:24.878 14:05:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:24.878 14:05:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:24.878 14:05:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:25.810 14:05:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:25.810 14:05:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:25.810 14:05:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:25.810 14:05:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:25.810 14:05:53 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:28.335 14:05:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:28.335 14:05:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:28.335 14:05:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:28.335 14:05:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:28.335 14:05:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:28.335 14:05:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:28.335 14:05:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:28.335 14:05:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:28.335 14:05:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:28.335 14:05:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:28.335 14:05:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:28.335 14:05:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:28.335 14:05:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:28.335 14:05:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:28.335 14:05:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:28.335 14:05:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:28.335 14:05:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:28.335 14:05:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:28.335 14:05:55 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:29.302 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:29.302 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:29.302 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:29.302 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:29.302 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.302 ************************************ 00:07:29.302 START TEST filesystem_ext4 00:07:29.302 ************************************ 00:07:29.302 14:05:56 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:29.302 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:29.302 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:29.302 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:29.302 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:29.302 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:29.302 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:29.302 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:29.302 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:29.302 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:07:29.302 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:29.302 mke2fs 1.46.5 (30-Dec-2021) 00:07:29.560 Discarding device blocks: 0/522240 done 00:07:29.560 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:29.560 Filesystem UUID: e5ebec14-83f6-473f-a4ae-1ac26be13146 00:07:29.560 Superblock backups stored on blocks: 00:07:29.560 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:29.560 00:07:29.560 Allocating group tables: 0/64 done 00:07:29.560 Writing inode tables: 0/64 done 00:07:29.560 Creating journal (8192 blocks): done 00:07:29.561 Writing superblocks and filesystem accounting information: 0/64 done 00:07:29.561 00:07:29.561 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:29.561 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:29.561 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:29.561 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:29.561 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:29.561 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:29.561 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:29.561 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:29.561 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 4183999 00:07:29.561 14:05:56 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:29.561 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:29.561 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:29.561 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:29.561 00:07:29.561 real 0m0.196s 00:07:29.561 user 0m0.022s 00:07:29.561 sys 0m0.045s 00:07:29.561 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:29.561 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:29.561 ************************************ 00:07:29.561 END TEST filesystem_ext4 00:07:29.561 ************************************ 00:07:29.561 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:29.561 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:29.561 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:29.561 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.561 ************************************ 00:07:29.561 START TEST filesystem_btrfs 00:07:29.561 ************************************ 00:07:29.561 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:29.561 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:29.561 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:29.561 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:29.561 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:29.561 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:29.561 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:29.561 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:29.561 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:29.561 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:29.561 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:29.819 btrfs-progs v6.6.2 00:07:29.819 See https://btrfs.readthedocs.io for more information. 
00:07:29.819 00:07:29.819 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:29.819 NOTE: several default settings have changed in version 5.15, please make sure 00:07:29.819 this does not affect your deployments: 00:07:29.819 - DUP for metadata (-m dup) 00:07:29.819 - enabled no-holes (-O no-holes) 00:07:29.819 - enabled free-space-tree (-R free-space-tree) 00:07:29.819 00:07:29.819 Label: (null) 00:07:29.819 UUID: 87bd2eca-1bc2-47a5-80ae-1120fc853683 00:07:29.819 Node size: 16384 00:07:29.819 Sector size: 4096 00:07:29.819 Filesystem size: 510.00MiB 00:07:29.819 Block group profiles: 00:07:29.819 Data: single 8.00MiB 00:07:29.819 Metadata: DUP 32.00MiB 00:07:29.819 System: DUP 8.00MiB 00:07:29.819 SSD detected: yes 00:07:29.819 Zoned device: no 00:07:29.819 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:29.819 Runtime features: free-space-tree 00:07:29.819 Checksum: crc32c 00:07:29.819 Number of devices: 1 00:07:29.819 Devices: 00:07:29.819 ID SIZE PATH 00:07:29.819 1 510.00MiB /dev/nvme0n1p1 00:07:29.819 00:07:29.819 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:29.819 14:05:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:29.819 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:29.819 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:29.819 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:29.819 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:29.819 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:29.819 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:29.819 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 4183999 00:07:29.819 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:29.820 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:29.820 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:29.820 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:29.820 00:07:29.820 real 0m0.283s 00:07:29.820 user 0m0.016s 00:07:29.820 sys 0m0.112s 00:07:29.820 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:29.820 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:29.820 ************************************ 00:07:29.820 END TEST filesystem_btrfs 00:07:29.820 ************************************ 00:07:29.820 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 
-- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:29.820 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:29.820 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:29.820 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.820 ************************************ 00:07:29.820 START TEST filesystem_xfs 00:07:29.820 ************************************ 00:07:29.820 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:29.820 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:29.820 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:29.820 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:29.820 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:29.820 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:29.820 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:29.820 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:07:29.820 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:30.078 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:30.078 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:30.078 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:30.078 = sectsz=512 attr=2, projid32bit=1 00:07:30.078 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:30.078 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:30.078 data = bsize=4096 blocks=130560, imaxpct=25 00:07:30.078 = sunit=0 swidth=0 blks 00:07:30.078 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:30.078 log =internal log bsize=4096 blocks=16384, version=2 00:07:30.078 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:30.078 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:30.078 Discarding blocks...Done. 
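The ext4, btrfs and xfs tests all funnel through the same make_filesystem helper; per the xtrace (common/autotest_common.sh@922-933) the only branch is the force flag handed to mkfs. A sketch reconstructed from the traced locals, not the verbatim function (the real helper also retries via the i counter):

  make_filesystem() {
      local fstype=$1
      local dev_name=$2
      local i=0 force
      # mkfs.ext4 takes -F to force; mkfs.btrfs and mkfs.xfs take -f
      if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
      mkfs.$fstype $force "$dev_name"
  }

Each test then runs the same smoke sequence visible in all three traces: mount the partition, touch and rm a file with a sync on either side, and umount.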
00:07:30.078 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:30.078 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:30.078 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:30.078 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:30.078 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:30.078 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:30.078 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:30.078 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:30.078 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 4183999 00:07:30.078 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:30.078 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:30.078 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:30.078 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:30.078 00:07:30.078 real 0m0.206s 00:07:30.078 user 0m0.016s 00:07:30.078 sys 0m0.049s 00:07:30.078 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:30.078 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:30.078 ************************************ 00:07:30.078 END TEST filesystem_xfs 00:07:30.078 ************************************ 00:07:30.078 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:30.078 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:30.078 14:05:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:31.451 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:31.451 14:05:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:31.451 14:05:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:07:31.451 14:05:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:31.451 14:05:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:31.451 14:05:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o 
NAME,SERIAL 00:07:31.451 14:05:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:31.451 14:05:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:07:31.451 14:05:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:31.451 14:05:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.451 14:05:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.451 14:05:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.451 14:05:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:31.451 14:05:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 4183999 00:07:31.451 14:05:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 4183999 ']' 00:07:31.452 14:05:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 4183999 00:07:31.452 14:05:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:07:31.452 14:05:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:31.452 14:05:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4183999 00:07:31.452 14:05:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:31.452 14:05:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:31.452 14:05:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4183999' 00:07:31.452 killing process with pid 4183999 00:07:31.452 14:05:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 4183999 00:07:31.452 14:05:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 4183999 00:07:31.710 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:31.710 00:07:31.710 real 0m7.782s 00:07:31.710 user 0m30.025s 00:07:31.710 sys 0m0.958s 00:07:31.710 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:31.710 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.710 ************************************ 00:07:31.710 END TEST nvmf_filesystem_no_in_capsule 00:07:31.710 ************************************ 00:07:31.710 14:05:59 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:31.710 14:05:59 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:31.968 14:05:59 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:31.968 14:05:59 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@10 -- # set +x 00:07:31.968 ************************************ 00:07:31.968 START TEST nvmf_filesystem_in_capsule 00:07:31.968 ************************************ 00:07:31.968 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:07:31.968 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:31.968 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:31.968 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:31.968 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:31.968 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.968 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=4185125 00:07:31.968 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:31.968 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 4185125 00:07:31.968 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 4185125 ']' 00:07:31.968 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.968 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:31.968 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.968 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:31.968 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.968 [2024-07-24 14:05:59.150521] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:31.968 [2024-07-24 14:05:59.150587] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.968 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.968 [2024-07-24 14:05:59.216358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:31.968 [2024-07-24 14:05:59.304648] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:31.968 [2024-07-24 14:05:59.304717] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:31.968 [2024-07-24 14:05:59.304744] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:31.968 [2024-07-24 14:05:59.304756] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:31.968 [2024-07-24 14:05:59.304765] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:31.968 [2024-07-24 14:05:59.304902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.968 [2024-07-24 14:05:59.304969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.968 [2024-07-24 14:05:59.304993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.968 [2024-07-24 14:05:59.304995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.226 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:32.226 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:32.226 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:32.227 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:32.227 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:32.227 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:32.227 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:32.227 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:07:32.227 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.227 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:32.227 [2024-07-24 14:05:59.485368] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd45840/0xd49d30) succeed. 00:07:32.227 [2024-07-24 14:05:59.496675] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd46e30/0xd8b3c0) succeed. 
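With the 4096-byte in-capsule transport created, the same four-step bring-up as the no-in-capsule run follows; the rpc_cmd calls traced below correspond to these RPCs (rpc.py standing in for scripts/rpc.py as before):

  rpc.py bdev_malloc_create 512 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The host side then connects with nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 and polls lsblk until the SPDKISFASTANDAWESOME serial appears.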
00:07:32.485 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.485 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:32.485 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.485 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:32.485 Malloc1 00:07:32.485 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.485 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:32.485 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.485 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:32.485 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.485 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:32.485 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.485 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:32.485 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.485 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:32.485 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.485 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:32.485 [2024-07-24 14:05:59.819469] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:32.485 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.485 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:32.485 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:32.485 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:32.485 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:32.485 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:32.485 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:32.485 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.485 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:32.486 
14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.486 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:32.486 { 00:07:32.486 "name": "Malloc1", 00:07:32.486 "aliases": [ 00:07:32.486 "3b6efd20-8218-4a91-b42b-6fb49bae6fa7" 00:07:32.486 ], 00:07:32.486 "product_name": "Malloc disk", 00:07:32.486 "block_size": 512, 00:07:32.486 "num_blocks": 1048576, 00:07:32.486 "uuid": "3b6efd20-8218-4a91-b42b-6fb49bae6fa7", 00:07:32.486 "assigned_rate_limits": { 00:07:32.486 "rw_ios_per_sec": 0, 00:07:32.486 "rw_mbytes_per_sec": 0, 00:07:32.486 "r_mbytes_per_sec": 0, 00:07:32.486 "w_mbytes_per_sec": 0 00:07:32.486 }, 00:07:32.486 "claimed": true, 00:07:32.486 "claim_type": "exclusive_write", 00:07:32.486 "zoned": false, 00:07:32.486 "supported_io_types": { 00:07:32.486 "read": true, 00:07:32.486 "write": true, 00:07:32.486 "unmap": true, 00:07:32.486 "write_zeroes": true, 00:07:32.486 "flush": true, 00:07:32.486 "reset": true, 00:07:32.486 "compare": false, 00:07:32.486 "compare_and_write": false, 00:07:32.486 "abort": true, 00:07:32.486 "nvme_admin": false, 00:07:32.486 "nvme_io": false 00:07:32.486 }, 00:07:32.486 "memory_domains": [ 00:07:32.486 { 00:07:32.486 "dma_device_id": "system", 00:07:32.486 "dma_device_type": 1 00:07:32.486 }, 00:07:32.486 { 00:07:32.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.486 "dma_device_type": 2 00:07:32.486 } 00:07:32.486 ], 00:07:32.486 "driver_specific": {} 00:07:32.486 } 00:07:32.486 ]' 00:07:32.486 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:32.744 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:32.744 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:32.744 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:32.744 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:32.744 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:32.744 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:32.744 14:05:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:33.677 14:06:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:33.677 14:06:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:33.677 14:06:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:33.677 14:06:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:33.677 14:06:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:36.201 14:06:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:36.201 14:06:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:36.201 14:06:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:36.201 14:06:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:36.201 14:06:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:36.201 14:06:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:36.201 14:06:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:36.201 14:06:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:36.201 14:06:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:36.201 14:06:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:36.201 14:06:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:36.201 14:06:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:36.201 14:06:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:36.201 14:06:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:36.201 14:06:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:36.201 14:06:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:36.201 14:06:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:36.201 14:06:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:36.201 14:06:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:37.134 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:37.134 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:37.134 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:37.134 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:37.134 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.392 ************************************ 00:07:37.392 START TEST filesystem_in_capsule_ext4 00:07:37.392 ************************************ 00:07:37.392 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:37.392 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 
-- target/filesystem.sh@18 -- # fstype=ext4 00:07:37.392 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:37.392 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:37.392 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:37.392 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:37.392 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:37.392 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:37.392 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:37.392 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:07:37.392 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:37.392 mke2fs 1.46.5 (30-Dec-2021) 00:07:37.392 Discarding device blocks: 0/522240 done 00:07:37.392 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:37.392 Filesystem UUID: c96e63f7-6046-4fa6-997b-d4717e087a80 00:07:37.392 Superblock backups stored on blocks: 00:07:37.392 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:37.392 00:07:37.392 Allocating group tables: 0/64 done 00:07:37.392 Writing inode tables: 0/64 done 00:07:37.392 Creating journal (8192 blocks): done 00:07:37.392 Writing superblocks and filesystem accounting information: 0/64 done 00:07:37.392 00:07:37.392 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:37.392 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:37.392 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:37.392 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:37.392 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:37.392 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:37.392 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:37.392 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:37.392 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 4185125 00:07:37.392 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- 
# lsblk -l -o NAME 00:07:37.392 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:37.392 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:37.392 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:37.392 00:07:37.392 real 0m0.189s 00:07:37.392 user 0m0.010s 00:07:37.392 sys 0m0.054s 00:07:37.393 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:37.393 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:37.393 ************************************ 00:07:37.393 END TEST filesystem_in_capsule_ext4 00:07:37.393 ************************************ 00:07:37.393 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:37.393 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:37.393 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:37.393 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.393 ************************************ 00:07:37.393 START TEST filesystem_in_capsule_btrfs 00:07:37.393 ************************************ 00:07:37.393 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:37.393 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:37.393 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:37.393 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:37.393 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:37.393 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:37.393 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:37.393 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:37.393 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:37.393 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:37.393 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:37.651 btrfs-progs v6.6.2 00:07:37.651 See 
https://btrfs.readthedocs.io for more information. 00:07:37.651 00:07:37.651 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:37.651 NOTE: several default settings have changed in version 5.15, please make sure 00:07:37.651 this does not affect your deployments: 00:07:37.651 - DUP for metadata (-m dup) 00:07:37.651 - enabled no-holes (-O no-holes) 00:07:37.651 - enabled free-space-tree (-R free-space-tree) 00:07:37.651 00:07:37.651 Label: (null) 00:07:37.651 UUID: 48022d85-4680-4c48-8fc2-aa4ec4a57716 00:07:37.651 Node size: 16384 00:07:37.651 Sector size: 4096 00:07:37.651 Filesystem size: 510.00MiB 00:07:37.651 Block group profiles: 00:07:37.651 Data: single 8.00MiB 00:07:37.651 Metadata: DUP 32.00MiB 00:07:37.651 System: DUP 8.00MiB 00:07:37.651 SSD detected: yes 00:07:37.651 Zoned device: no 00:07:37.651 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:37.651 Runtime features: free-space-tree 00:07:37.651 Checksum: crc32c 00:07:37.651 Number of devices: 1 00:07:37.651 Devices: 00:07:37.651 ID SIZE PATH 00:07:37.651 1 510.00MiB /dev/nvme0n1p1 00:07:37.651 00:07:37.651 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:37.651 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:37.651 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:37.651 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:37.651 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:37.651 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:37.651 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:37.651 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:37.651 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 4185125 00:07:37.651 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:37.651 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:37.651 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:37.651 14:06:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:37.651 00:07:37.651 real 0m0.255s 00:07:37.651 user 0m0.018s 00:07:37.651 sys 0m0.108s 00:07:37.651 14:06:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:37.651 14:06:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:37.651 ************************************ 00:07:37.651 END TEST 
filesystem_in_capsule_btrfs 00:07:37.651 ************************************ 00:07:37.651 14:06:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:37.651 14:06:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:37.651 14:06:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:37.651 14:06:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.909 ************************************ 00:07:37.909 START TEST filesystem_in_capsule_xfs 00:07:37.909 ************************************ 00:07:37.909 14:06:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:37.909 14:06:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:37.909 14:06:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:37.909 14:06:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:37.909 14:06:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:37.909 14:06:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:37.909 14:06:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:37.909 14:06:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:07:37.909 14:06:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:37.909 14:06:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:37.909 14:06:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:37.909 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:37.909 = sectsz=512 attr=2, projid32bit=1 00:07:37.909 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:37.909 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:37.909 data = bsize=4096 blocks=130560, imaxpct=25 00:07:37.909 = sunit=0 swidth=0 blks 00:07:37.909 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:37.909 log =internal log bsize=4096 blocks=16384, version=2 00:07:37.909 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:37.909 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:37.910 Discarding blocks...Done. 
00:07:37.910 14:06:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:37.910 14:06:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:37.910 14:06:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:37.910 14:06:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:37.910 14:06:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:37.910 14:06:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:37.910 14:06:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:37.910 14:06:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:37.910 14:06:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 4185125 00:07:37.910 14:06:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:37.910 14:06:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:37.910 14:06:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:37.910 14:06:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:37.910 00:07:37.910 real 0m0.202s 00:07:37.910 user 0m0.011s 00:07:37.910 sys 0m0.054s 00:07:37.910 14:06:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:37.910 14:06:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:37.910 ************************************ 00:07:37.910 END TEST filesystem_in_capsule_xfs 00:07:37.910 ************************************ 00:07:37.910 14:06:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:37.910 14:06:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:37.910 14:06:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:39.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:39.281 14:06:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:39.281 14:06:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:07:39.281 14:06:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:39.281 14:06:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:39.281 14:06:06 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:39.281 14:06:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:39.281 14:06:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:07:39.281 14:06:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:39.282 14:06:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.282 14:06:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:39.282 14:06:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.282 14:06:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:39.282 14:06:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 4185125 00:07:39.282 14:06:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 4185125 ']' 00:07:39.282 14:06:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 4185125 00:07:39.282 14:06:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:07:39.282 14:06:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:39.282 14:06:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4185125 00:07:39.282 14:06:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:39.282 14:06:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:39.282 14:06:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4185125' 00:07:39.282 killing process with pid 4185125 00:07:39.282 14:06:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 4185125 00:07:39.282 14:06:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 4185125 00:07:39.540 14:06:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:39.540 00:07:39.540 real 0m7.797s 00:07:39.540 user 0m29.963s 00:07:39.540 sys 0m0.992s 00:07:39.540 14:06:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:39.540 14:06:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:39.540 ************************************ 00:07:39.540 END TEST nvmf_filesystem_in_capsule 00:07:39.540 ************************************ 00:07:39.798 14:06:06 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:39.798 14:06:06 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:39.798 14:06:06 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:39.798 14:06:06 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:39.798 
14:06:06 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:39.798 14:06:06 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:39.798 14:06:06 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:39.798 14:06:06 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:39.798 rmmod nvme_rdma 00:07:39.798 rmmod nvme_fabrics 00:07:39.798 14:06:06 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:39.798 14:06:06 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:39.798 14:06:06 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:39.798 14:06:06 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:39.798 14:06:06 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:39.798 14:06:06 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:07:39.798 00:07:39.798 real 0m18.518s 00:07:39.799 user 1m1.115s 00:07:39.799 sys 0m3.858s 00:07:39.799 14:06:06 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:39.799 14:06:06 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:39.799 ************************************ 00:07:39.799 END TEST nvmf_filesystem 00:07:39.799 ************************************ 00:07:39.799 14:06:06 nvmf_rdma -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:07:39.799 14:06:06 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:39.799 14:06:06 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:39.799 14:06:06 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:39.799 ************************************ 00:07:39.799 START TEST nvmf_target_discovery 00:07:39.799 ************************************ 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:07:39.799 * Looking for test storage... 
00:07:39.799 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:39.799 14:06:07 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:07:42.330 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:07:42.330 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:42.330 14:06:09 
nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:07:42.330 Found net devices under 0000:81:00.0: mlx_0_0 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:07:42.330 Found net devices under 0000:81:00.1: mlx_0_1 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@420 -- # rdma_device_init 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@58 -- # uname 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@502 -- # allocate_nic_ips 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:42.330 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:42.330 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:42.330 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:07:42.331 altname enp129s0f0np0 00:07:42.331 inet 192.168.100.8/24 scope global mlx_0_0 00:07:42.331 valid_lft forever preferred_lft forever 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- 
nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:42.331 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:42.331 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:07:42.331 altname enp129s0f1np1 00:07:42.331 inet 192.168.100.9/24 scope global mlx_0_1 00:07:42.331 valid_lft forever preferred_lft forever 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:42.331 
14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:07:42.331 192.168.100.9' 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:07:42.331 192.168.100.9' 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # head -n 1 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:07:42.331 192.168.100.9' 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # tail -n +2 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # head -n 1 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=4188831 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 4188831 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 4188831 ']' 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
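The nvmfappstart step above launches the target (build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, per the trace) and then blocks in waitforlisten until the RPC socket answers. A hedged sketch of that wait, assuming the stock scripts/rpc.py client and the /var/tmp/spdk.sock path echoed in this run:

    # start the target and poll its RPC socket - a sketch, not the exact waitforlisten()
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for _ in $(seq 1 100); do
        # rpc_get_methods succeeds only once the app listens on /var/tmp/spdk.sock
        if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
        sleep 0.5
    done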
00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:42.331 14:06:09 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.589 [2024-07-24 14:06:09.714083] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:42.589 [2024-07-24 14:06:09.714173] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:42.589 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.589 [2024-07-24 14:06:09.788852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:42.589 [2024-07-24 14:06:09.884547] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:42.589 [2024-07-24 14:06:09.884620] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:42.589 [2024-07-24 14:06:09.884635] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:42.589 [2024-07-24 14:06:09.884649] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:42.589 [2024-07-24 14:06:09.884660] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:42.589 [2024-07-24 14:06:09.884718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.590 [2024-07-24 14:06:09.884751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.590 [2024-07-24 14:06:09.884874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:42.590 [2024-07-24 14:06:09.884879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.848 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:42.848 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:07:42.848 14:06:10 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:42.848 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:42.848 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.848 14:06:10 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:42.848 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:42.848 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.848 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.848 [2024-07-24 14:06:10.063821] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x19909e0/0x1994ed0) succeed. 00:07:42.848 [2024-07-24 14:06:10.074715] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1991fd0/0x19d6560) succeed. 
00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.106 Null1 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.106 [2024-07-24 14:06:10.257841] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.106 Null2 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:43.106 14:06:10 
nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.106 Null3 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.106 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:07:43.107 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.107 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.107 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.107 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:43.107 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:43.107 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.107 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.107 Null4 00:07:43.107 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.107 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:43.107 14:06:10 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable
00:07:43.107 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:43.107 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:43.107 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4
00:07:43.107 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:43.107 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:43.107 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:43.107 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420
00:07:43.107 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:43.107 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:43.107 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:43.107 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:07:43.107 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:43.107 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:43.107 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:43.107 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430
00:07:43.107 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:43.107 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:43.107 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:43.107 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -a 192.168.100.8 -s 4420
00:07:43.107
00:07:43.107 Discovery Log Number of Records 6, Generation counter 6
00:07:43.107 =====Discovery Log Entry 0======
00:07:43.107 trtype:  rdma
00:07:43.107 adrfam:  ipv4
00:07:43.107 subtype: current discovery subsystem
00:07:43.107 treq:    not required
00:07:43.107 portid:  0
00:07:43.107 trsvcid: 4420
00:07:43.107 subnqn:  nqn.2014-08.org.nvmexpress.discovery
00:07:43.107 traddr:  192.168.100.8
00:07:43.107 eflags:  explicit discovery connections, duplicate discovery information
00:07:43.107 rdma_prtype: not specified
00:07:43.107 rdma_qptype: connected
00:07:43.107 rdma_cms:    rdma-cm
00:07:43.107 rdma_pkey: 0x0000
00:07:43.107 =====Discovery Log Entry 1======
00:07:43.107 trtype:  rdma
00:07:43.107 adrfam:  ipv4
00:07:43.107 subtype: nvme subsystem
00:07:43.107 treq:    not required
00:07:43.107 portid:  0
00:07:43.107 trsvcid: 4420
00:07:43.107 subnqn:  nqn.2016-06.io.spdk:cnode1
00:07:43.107 traddr:  192.168.100.8
00:07:43.107 eflags:  none
00:07:43.107 rdma_prtype: not specified
00:07:43.107 rdma_qptype: connected
00:07:43.107 rdma_cms:    rdma-cm
00:07:43.107 rdma_pkey: 0x0000
00:07:43.107 =====Discovery Log Entry 2======
00:07:43.107 trtype:  rdma
00:07:43.107 adrfam:  ipv4
00:07:43.107 subtype: nvme subsystem
00:07:43.107 treq:    not required
00:07:43.107 portid:  0
00:07:43.107 trsvcid: 4420
00:07:43.107 subnqn:  nqn.2016-06.io.spdk:cnode2
00:07:43.107 traddr:  192.168.100.8
00:07:43.107 eflags:  none
00:07:43.107 rdma_prtype: not specified
00:07:43.107 rdma_qptype: connected
00:07:43.107 rdma_cms:    rdma-cm
00:07:43.107 rdma_pkey: 0x0000
00:07:43.107 =====Discovery Log Entry 3======
00:07:43.107 trtype:  rdma
00:07:43.107 adrfam:  ipv4
00:07:43.107 subtype: nvme subsystem
00:07:43.107 treq:    not required
00:07:43.107 portid:  0
00:07:43.107 trsvcid: 4420
00:07:43.107 subnqn:  nqn.2016-06.io.spdk:cnode3
00:07:43.107 traddr:  192.168.100.8
00:07:43.107 eflags:  none
00:07:43.107 rdma_prtype: not specified
00:07:43.107 rdma_qptype: connected
00:07:43.107 rdma_cms:    rdma-cm
00:07:43.107 rdma_pkey: 0x0000
00:07:43.107 =====Discovery Log Entry 4======
00:07:43.107 trtype:  rdma
00:07:43.107 adrfam:  ipv4
00:07:43.107 subtype: nvme subsystem
00:07:43.107 treq:    not required
00:07:43.107 portid:  0
00:07:43.107 trsvcid: 4420
00:07:43.107 subnqn:  nqn.2016-06.io.spdk:cnode4
00:07:43.107 traddr:  192.168.100.8
00:07:43.107 eflags:  none
00:07:43.107 rdma_prtype: not specified
00:07:43.107 rdma_qptype: connected
00:07:43.107 rdma_cms:    rdma-cm
00:07:43.107 rdma_pkey: 0x0000
00:07:43.107 =====Discovery Log Entry 5======
00:07:43.107 trtype:  rdma
00:07:43.107 adrfam:  ipv4
00:07:43.107 subtype: discovery subsystem referral
00:07:43.107 treq:    not required
00:07:43.107 portid:  0
00:07:43.107 trsvcid: 4430
00:07:43.107 subnqn:  nqn.2014-08.org.nvmexpress.discovery
00:07:43.107 traddr:  192.168.100.8
00:07:43.107 eflags:  none
00:07:43.107 rdma_prtype: unrecognized
00:07:43.107 rdma_qptype: unrecognized
00:07:43.107 rdma_cms:    unrecognized
00:07:43.107 rdma_pkey: 0x0000
00:07:43.107 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:07:43.107 Perform nvmf subsystem discovery via RPC
00:07:43.107 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:07:43.107 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:43.107 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:43.366 [
00:07:43.366 {
00:07:43.366 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:07:43.366 "subtype": "Discovery",
00:07:43.366 "listen_addresses": [
00:07:43.366 {
00:07:43.366 "trtype": "RDMA",
00:07:43.366 "adrfam": "IPv4",
00:07:43.366 "traddr": "192.168.100.8",
00:07:43.366 "trsvcid": "4420"
00:07:43.366 }
00:07:43.366 ],
00:07:43.366 "allow_any_host": true,
00:07:43.366 "hosts": []
00:07:43.366 },
00:07:43.366 {
00:07:43.366 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:07:43.366 "subtype": "NVMe",
00:07:43.366 "listen_addresses": [
00:07:43.366 {
00:07:43.366 "trtype": "RDMA",
00:07:43.366 "adrfam": "IPv4",
00:07:43.366 "traddr": "192.168.100.8",
00:07:43.366 "trsvcid": "4420"
00:07:43.366 }
00:07:43.366 ],
00:07:43.366 "allow_any_host": true,
00:07:43.366 "hosts": [],
00:07:43.366 "serial_number": "SPDK00000000000001",
00:07:43.366 "model_number": "SPDK bdev Controller",
00:07:43.366 "max_namespaces": 32,
00:07:43.366 "min_cntlid": 1,
00:07:43.366 "max_cntlid": 65519,
00:07:43.366 "namespaces": [
00:07:43.366 {
00:07:43.366 "nsid": 1,
00:07:43.366 "bdev_name": "Null1",
00:07:43.366 "name": "Null1",
00:07:43.366 "nguid": "01A7F139DB954F57910415EF50D33A59",
00:07:43.366 "uuid": "01a7f139-db95-4f57-9104-15ef50d33a59"
00:07:43.366 }
00:07:43.366 ]
00:07:43.366 },
00:07:43.366 {
00:07:43.366 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:07:43.366 "subtype": "NVMe",
00:07:43.366 "listen_addresses": [
00:07:43.366 {
00:07:43.366 "trtype": "RDMA",
00:07:43.366 "adrfam": "IPv4",
00:07:43.366 "traddr": "192.168.100.8",
00:07:43.366 "trsvcid": "4420"
00:07:43.366 }
00:07:43.366 ],
00:07:43.366 "allow_any_host": true,
00:07:43.366 "hosts": [],
00:07:43.366 "serial_number": "SPDK00000000000002",
00:07:43.366 "model_number": "SPDK bdev Controller",
00:07:43.366 "max_namespaces": 32,
00:07:43.366 "min_cntlid": 1,
00:07:43.366 "max_cntlid": 65519,
00:07:43.366 "namespaces": [
00:07:43.366 {
00:07:43.366 "nsid": 1,
00:07:43.366 "bdev_name": "Null2",
00:07:43.366 "name": "Null2",
00:07:43.366 "nguid": "E63D7ADE104D4B96949FE17DAFDAEA0F",
00:07:43.366 "uuid": "e63d7ade-104d-4b96-949f-e17dafdaea0f"
00:07:43.366 }
00:07:43.366 ]
00:07:43.366 },
00:07:43.366 {
00:07:43.366 "nqn": "nqn.2016-06.io.spdk:cnode3",
00:07:43.366 "subtype": "NVMe",
00:07:43.366 "listen_addresses": [
00:07:43.366 {
00:07:43.366 "trtype": "RDMA",
00:07:43.366 "adrfam": "IPv4",
00:07:43.366 "traddr": "192.168.100.8",
00:07:43.366 "trsvcid": "4420"
00:07:43.366 }
00:07:43.366 ],
00:07:43.366 "allow_any_host": true,
00:07:43.366 "hosts": [],
00:07:43.366 "serial_number": "SPDK00000000000003",
00:07:43.366 "model_number": "SPDK bdev Controller",
00:07:43.366 "max_namespaces": 32,
00:07:43.366 "min_cntlid": 1,
00:07:43.366 "max_cntlid": 65519,
00:07:43.366 "namespaces": [
00:07:43.366 {
00:07:43.366 "nsid": 1,
00:07:43.366 "bdev_name": "Null3",
00:07:43.366 "name": "Null3",
00:07:43.366 "nguid": "54A9ADC6C3C74DA6B82D88591FBB418F",
00:07:43.366 "uuid": "54a9adc6-c3c7-4da6-b82d-88591fbb418f"
00:07:43.367 }
00:07:43.367 ]
00:07:43.367 },
00:07:43.367 {
00:07:43.367 "nqn": "nqn.2016-06.io.spdk:cnode4",
00:07:43.367 "subtype": "NVMe",
00:07:43.367 "listen_addresses": [
00:07:43.367 {
00:07:43.367 "trtype": "RDMA",
00:07:43.367 "adrfam": "IPv4",
00:07:43.367 "traddr": "192.168.100.8",
00:07:43.367 "trsvcid": "4420"
00:07:43.367 }
00:07:43.367 ],
00:07:43.367 "allow_any_host": true,
00:07:43.367 "hosts": [],
00:07:43.367 "serial_number": "SPDK00000000000004",
00:07:43.367 "model_number": "SPDK bdev Controller",
00:07:43.367 "max_namespaces": 32,
00:07:43.367 "min_cntlid": 1,
00:07:43.367 "max_cntlid": 65519,
00:07:43.367 "namespaces": [
00:07:43.367 {
00:07:43.367 "nsid": 1,
00:07:43.367 "bdev_name": "Null4",
00:07:43.367 "name": "Null4",
00:07:43.367 "nguid": "F62AB43BB677466E844636E7CD1B90BE",
00:07:43.367 "uuid": "f62ab43b-b677-466e-8446-36e7cd1b90be"
00:07:43.367 }
00:07:43.367 ]
00:07:43.367 }
00:07:43.367 ]
00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4
00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery --
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:43.367 rmmod nvme_rdma 00:07:43.367 rmmod nvme_fabrics 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 4188831 ']' 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 4188831 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 4188831 ']' 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 4188831 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4188831 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4188831' 00:07:43.367 killing process with pid 4188831 00:07:43.367 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 4188831 00:07:43.368 14:06:10 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@970 -- # wait 4188831
00:07:43.626 14:06:10 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:07:43.626 14:06:10 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]]
00:07:43.626
00:07:43.626 real	0m3.955s
00:07:43.626 user	0m5.091s
00:07:43.626 sys	0m2.234s
00:07:43.626 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:43.626 14:06:10 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:43.626 ************************************
00:07:43.626 END TEST nvmf_target_discovery
00:07:43.626 ************************************
00:07:43.884 14:06:11 nvmf_rdma -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma
00:07:43.884 14:06:11 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:07:43.884 14:06:11 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:43.884 14:06:11 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:07:43.884 ************************************
00:07:43.884 START TEST nvmf_referrals
00:07:43.884 ************************************
00:07:43.884 14:06:11 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma
00:07:43.884 * Looking for test storage...
00:07:43.884 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:07:43.884 14:06:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:07:43.884 14:06:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s
00:07:43.884 14:06:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:43.884 14:06:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:43.884 14:06:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:43.884 14:06:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:43.884 14:06:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:43.884 14:06:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:43.884 14:06:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:43.884 14:06:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:43.884 14:06:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:43.884 14:06:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:43.884 14:06:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911
00:07:43.884 14:06:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911
00:07:43.884 14:06:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:43.884 14:06:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:43.884 14:06:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:43.884 14:06:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:43.884 14:06:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@45 -- #
source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:43.884 14:06:11 nvmf_rdma.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.884 14:06:11 nvmf_rdma.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.884 14:06:11 nvmf_rdma.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.884 14:06:11 nvmf_rdma.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.884 14:06:11 nvmf_rdma.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.884 14:06:11 nvmf_rdma.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.884 14:06:11 nvmf_rdma.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:43.884 14:06:11 nvmf_rdma.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.884 14:06:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:43.884 14:06:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:43.884 14:06:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:43.884 14:06:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:43.884 14:06:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:43.885 14:06:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:43.885 14:06:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@33 -- # '[' 
-n '' ']' 00:07:43.885 14:06:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:43.885 14:06:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:43.885 14:06:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:43.885 14:06:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:43.885 14:06:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:43.885 14:06:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:43.885 14:06:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:43.885 14:06:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:43.885 14:06:11 nvmf_rdma.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:43.885 14:06:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:43.885 14:06:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:43.885 14:06:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:43.885 14:06:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:43.885 14:06:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:43.885 14:06:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.885 14:06:11 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:43.885 14:06:11 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.885 14:06:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:43.885 14:06:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:43.885 14:06:11 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:43.885 14:06:11 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:07:46.494 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:07:46.494 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:46.494 
14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:07:46.494 Found net devices under 0000:81:00.0: mlx_0_0 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:07:46.494 Found net devices under 0000:81:00.1: mlx_0_1 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@420 -- # rdma_device_init 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@58 -- # uname 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@502 -- # allocate_nic_ips 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:46.494 14:06:13 
nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:46.494 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:46.495 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:46.495 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:07:46.495 altname enp129s0f0np0 00:07:46.495 inet 192.168.100.8/24 scope global mlx_0_0 00:07:46.495 valid_lft forever preferred_lft forever 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:46.495 
14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:46.495 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:46.495 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:07:46.495 altname enp129s0f1np1 00:07:46.495 inet 192.168.100.9/24 scope global mlx_0_1 00:07:46.495 valid_lft forever preferred_lft forever 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 
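
The nvmftestinit trace above resolves each RDMA interface's IPv4 address by piping `ip -o -4 addr show` through `awk` and `cut`; the same steps repeat for mlx_0_1 below. A hedged reconstruction of that helper as a standalone function (the real `get_ip_address` lives in test/nvmf/common.sh; this version is inferred from the xtrace, not copied from the source):

    # Return the first IPv4 address configured on an interface, without the prefix length.
    get_ip_address() {
        local interface=$1
        # "ip -o -4" prints one line per address; field 4 is e.g. 192.168.100.8/24.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0    # -> 192.168.100.8 on this rig
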
00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1
00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1
00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1
00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}'
00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1
00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8
00:07:46.495 192.168.100.9'
00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # echo '192.168.100.8
00:07:46.495 192.168.100.9'
00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # head -n 1
00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # echo '192.168.100.8
00:07:46.495 192.168.100.9'
00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # tail -n +2
00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # head -n 1
00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']'
00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == tcp ']'
00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == rdma ']'
00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-rdma
00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF
00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable
00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=4190943
00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 4190943
00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 4190943 ']'
00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100
00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:46.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable
00:07:46.495 14:06:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:46.495 [2024-07-24 14:06:13.770335] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:07:46.495 [2024-07-24 14:06:13.770420] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:46.495 EAL: No free 2048 kB hugepages reported on node 1
00:07:46.495 [2024-07-24 14:06:13.837388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:46.753 [2024-07-24 14:06:13.925902] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:46.753 [2024-07-24 14:06:13.925970] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:46.753 [2024-07-24 14:06:13.925984] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:46.753 [2024-07-24 14:06:13.925996] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:46.753 [2024-07-24 14:06:13.926007] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:46.753 [2024-07-24 14:06:13.926065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:46.753 [2024-07-24 14:06:13.926127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:07:46.753 [2024-07-24 14:06:13.926157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:07:46.753 [2024-07-24 14:06:13.926159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:46.753 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:07:46.753 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0
00:07:46.753 14:06:14 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:07:46.753 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:46.753 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:46.753 14:06:14 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:46.753 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:07:46.753 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:46.753 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:46.753 [2024-07-24 14:06:14.108656] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ff09e0/0x1ff4ed0) succeed.
00:07:46.753 [2024-07-24 14:06:14.119472] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ff1fd0/0x2036560) succeed.
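
With the referrals-test target up and its IB devices created, the trace below adds a discovery listener on port 8009 and three referrals, then reads them back over both RPC and `nvme discover`. The same sequence by hand (a sketch assuming SPDK's scripts/rpc.py; the addresses and ports are the ones this run uses):

    # Sketch: the referral setup referrals.sh performs next.
    rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 8009
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        rpc.py nvmf_discovery_add_referral -t rdma -a $ip -s 4430
    done
    rpc.py nvmf_discovery_get_referrals | jq length        # the test expects 3
    nvme discover -t rdma -a 192.168.100.8 -s 8009 -o json # referrals also visible to a host
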
00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:47.011 [2024-07-24 14:06:14.260820] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@49 -- # [[ 
127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:47.011 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:47.268 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:47.268 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:47.268 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:07:47.268 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.268 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:47.268 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.268 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:07:47.268 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.268 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:47.268 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.268 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:07:47.268 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.268 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:47.268 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.268 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:47.268 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:47.268 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.268 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:47.268 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.268 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:47.268 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:47.268 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:47.268 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:47.268 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 
--hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:47.268 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:47.268 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:47.268 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:47.268 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:47.268 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:07:47.268 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.268 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:47.269 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.269 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:47.269 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.269 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:47.526 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.526 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:47.526 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:47.526 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:47.526 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.526 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:47.526 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:47.526 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:47.526 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.526 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:47.526 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:47.526 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:47.526 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:47.526 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:47.526 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:47.526 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:47.526 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:47.526 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:47.526 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:47.526 14:06:14 nvmf_rdma.nvmf_referrals -- 
target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:47.526 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:47.526 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:47.526 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:47.526 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:47.526 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:47.526 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:47.526 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:47.526 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:47.526 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:47.526 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:47.782 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:47.782 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:47.782 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.782 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:47.782 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.782 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:47.782 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:47.782 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:47.782 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.782 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:47.782 14:06:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:47.782 14:06:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:47.782 14:06:15 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.782 14:06:15 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:47.783 14:06:15 nvmf_rdma.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:47.783 14:06:15 nvmf_rdma.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:47.783 14:06:15 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:47.783 14:06:15 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:47.783 
14:06:15 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:47.783 14:06:15 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:47.783 14:06:15 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:47.783 14:06:15 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:47.783 14:06:15 nvmf_rdma.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:47.783 14:06:15 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:47.783 14:06:15 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:47.783 14:06:15 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:47.783 14:06:15 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:47.783 14:06:15 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:48.039 14:06:15 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:48.039 14:06:15 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:48.039 14:06:15 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:48.039 14:06:15 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:48.039 14:06:15 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:48.039 14:06:15 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:48.039 14:06:15 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:48.039 14:06:15 nvmf_rdma.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:48.039 14:06:15 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.039 14:06:15 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:48.039 14:06:15 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.039 14:06:15 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:48.039 14:06:15 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:48.039 14:06:15 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.039 14:06:15 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:48.039 14:06:15 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.039 14:06:15 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:48.039 14:06:15 nvmf_rdma.nvmf_referrals -- 
target/referrals.sh@83 -- # get_referral_ips nvme 00:07:48.040 14:06:15 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:48.040 14:06:15 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:48.040 14:06:15 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:48.040 14:06:15 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:48.040 14:06:15 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:48.296 14:06:15 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:48.296 14:06:15 nvmf_rdma.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:48.296 14:06:15 nvmf_rdma.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:48.296 14:06:15 nvmf_rdma.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:48.296 14:06:15 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:48.296 14:06:15 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:48.296 14:06:15 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:48.296 14:06:15 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:48.296 14:06:15 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:48.296 14:06:15 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:48.296 14:06:15 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:48.296 rmmod nvme_rdma 00:07:48.296 rmmod nvme_fabrics 00:07:48.296 14:06:15 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:48.296 14:06:15 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:48.296 14:06:15 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:48.296 14:06:15 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 4190943 ']' 00:07:48.296 14:06:15 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 4190943 00:07:48.296 14:06:15 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 4190943 ']' 00:07:48.296 14:06:15 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 4190943 00:07:48.296 14:06:15 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:07:48.296 14:06:15 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:48.296 14:06:15 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4190943 00:07:48.296 14:06:15 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:48.296 14:06:15 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:48.296 14:06:15 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4190943' 00:07:48.296 killing process with pid 4190943 00:07:48.297 14:06:15 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 4190943 00:07:48.297 14:06:15 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 4190943 00:07:48.554 14:06:15 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:48.554 14:06:15 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 
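Everything this test checked reduces to one round trip: referrals are registered with nvmf_discovery_add_referral, read back with nvmf_discovery_get_referrals, and cross-checked against what a host actually sees in the discovery log via nvme discover. A condensed sketch of that round trip, assuming a target already serving discovery on 192.168.100.8:8009 and SPDK's rpc.py on PATH talking to the default /var/tmp/spdk.sock; the trace's rpc_cmd wrapper and the --hostnqn/--hostid arguments are dropped for brevity:

    # Sketch of the referral round trip exercised above (not the test script
    # itself). Assumes an RDMA discovery listener on 192.168.100.8:8009.
    rpc=rpc.py

    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        "$rpc" nvmf_discovery_add_referral -t rdma -a "$ip" -s 4430
    done

    # Target-side view of the referrals.
    rpc_ips=$("$rpc" nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)

    # Host-side view: every referral appears as a discovery-log record whose
    # subtype is not "current discovery subsystem".
    nvme_ips=$(nvme discover -t rdma -a 192.168.100.8 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' |
        sort)

    [[ $rpc_ips == "$nvme_ips" ]] || echo 'referral views disagree' >&2

Piping both views through sort makes the comparison order-insensitive, which is why every [[ ... == ... ]] check in the trace above is preceded by a sort of both the RPC output and the nvme discover output.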
00:07:48.554 00:07:48.554 real 0m4.823s 00:07:48.554 user 0m9.038s 00:07:48.554 sys 0m2.480s 00:07:48.554 14:06:15 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:48.554 14:06:15 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:48.554 ************************************ 00:07:48.554 END TEST nvmf_referrals 00:07:48.554 ************************************ 00:07:48.554 14:06:15 nvmf_rdma -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:07:48.554 14:06:15 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:48.554 14:06:15 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:48.554 14:06:15 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:48.554 ************************************ 00:07:48.554 START TEST nvmf_connect_disconnect 00:07:48.554 ************************************ 00:07:48.554 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:07:48.812 * Looking for test storage... 00:07:48.812 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:48.812 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:48.813 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:48.813 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:48.813 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:48.813 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.813 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:48.813 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.813 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:48.813 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:48.813 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:48.813 14:06:15 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:51.340 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:51.340 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:07:51.340 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:51.340 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:51.340 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:51.340 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:51.340 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:51.340 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:07:51.340 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:51.340 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:07:51.340 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:07:51.340 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:07:51.340 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:07:51.340 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:07:51.340 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:07:51.340 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@301 -- 
# e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:51.340 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:51.340 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:51.340 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:51.340 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:51.340 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:51.340 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:51.340 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:51.340 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:07:51.341 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:07:51.341 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect 
-- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:07:51.341 Found net devices under 0000:81:00.0: mlx_0_0 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:07:51.341 Found net devices under 0000:81:00.1: mlx_0_1 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # uname 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:51.341 14:06:18 
nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:51.341 12: 
mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:51.341 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:07:51.341 altname enp129s0f0np0 00:07:51.341 inet 192.168.100.8/24 scope global mlx_0_0 00:07:51.341 valid_lft forever preferred_lft forever 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:51.341 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:51.341 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:07:51.341 altname enp129s0f1np1 00:07:51.341 inet 192.168.100.9/24 scope global mlx_0_1 00:07:51.341 valid_lft forever preferred_lft forever 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:51.341 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # 
[[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:07:51.342 192.168.100.9' 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:07:51.342 192.168.100.9' 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:07:51.342 192.168.100.9' 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # tail -n +2 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:51.342 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:51.342 14:06:18 
nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:51.600 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=4193359 00:07:51.600 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:51.600 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 4193359 00:07:51.600 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 4193359 ']' 00:07:51.600 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.600 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:51.600 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.600 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:51.600 14:06:18 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:51.600 [2024-07-24 14:06:18.755921] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:07:51.600 [2024-07-24 14:06:18.756022] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.600 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.600 [2024-07-24 14:06:18.835839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:51.600 [2024-07-24 14:06:18.927824] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:51.600 [2024-07-24 14:06:18.927875] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:51.600 [2024-07-24 14:06:18.927898] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:51.600 [2024-07-24 14:06:18.927910] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:51.600 [2024-07-24 14:06:18.927920] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
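The records from nvmfpid= onward follow SPDK's usual launch pattern: start nvmf_tgt in the background, remember its pid, and spin in waitforlisten until the app answers on its UNIX RPC socket (/var/tmp/spdk.sock here). A minimal stand-in for that pattern is sketched below; the NVMF_TGT and RPC paths are placeholders, any cheap RPC works as the readiness probe (spdk_get_version is used here), and the real waitforlisten in autotest_common.sh is considerably more careful:

    # Minimal launch-and-wait sketch; the binary/script paths are assumptions
    # about where this tree builds them.
    NVMF_TGT=build/bin/nvmf_tgt
    RPC=scripts/rpc.py
    SOCK=/var/tmp/spdk.sock

    "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Poll the RPC socket until the target responds, bailing out early if the
    # process dies during startup instead of hanging forever.
    for ((i = 0; i < 100; i++)); do
        "$RPC" -s "$SOCK" spdk_get_version >/dev/null 2>&1 && break
        kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited early' >&2; exit 1; }
        sleep 0.1
    done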
00:07:51.600 [2024-07-24 14:06:18.930813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.600 [2024-07-24 14:06:18.930856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.600 [2024-07-24 14:06:18.930915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:51.600 [2024-07-24 14:06:18.930918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.857 14:06:19 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:51.857 14:06:19 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:07:51.857 14:06:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:51.857 14:06:19 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:51.857 14:06:19 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:51.857 14:06:19 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:51.857 14:06:19 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:07:51.857 14:06:19 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.857 14:06:19 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:51.857 [2024-07-24 14:06:19.085663] rdma.c:2726:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:07:51.857 [2024-07-24 14:06:19.110163] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1bb4a00/0x1bb8ef0) succeed. 00:07:51.857 [2024-07-24 14:06:19.121065] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1bb5ff0/0x1bfa580) succeed. 
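The 192.168.100.8/192.168.100.9 pair that the transport and listeners use was not hard-coded: allocate_nic_ips, traced a few records earlier, walks the RDMA-capable netdevs and pulls each one's IPv4 with the ip/awk/cut pipeline seen at nvmf/common.sh@113. The same helper as a standalone function:

    # Standalone version of the get_ip_address helper from the trace
    # (nvmf/common.sh@112-113): print the IPv4 address(es) bound to an
    # interface, one per line, with the /prefix length stripped.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 on this rig
    NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 on this rig

With multiple addresses on one interface this prints them all; the trace's version behaves the same and relies on each mlx_0_* port carrying a single /24 address.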
00:07:52.114 14:06:19 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.114 14:06:19 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:52.114 14:06:19 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.114 14:06:19 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:52.114 14:06:19 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.114 14:06:19 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:52.114 14:06:19 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:52.114 14:06:19 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.114 14:06:19 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:52.114 14:06:19 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.114 14:06:19 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:52.114 14:06:19 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.115 14:06:19 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:52.115 14:06:19 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.115 14:06:19 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:52.115 14:06:19 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.115 14:06:19 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:52.115 [2024-07-24 14:06:19.287544] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:52.115 14:06:19 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.115 14:06:19 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:07:52.115 14:06:19 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:07:52.115 14:06:19 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:07:52.115 14:06:19 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:55.389 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:58.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:01.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:05.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:08.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:11.830 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:15.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:18.388 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:21.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.947 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:28.225 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:08:31.537 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.814 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.677 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.933 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.208 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.514 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.790 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.897 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.499 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.047 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.459 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.736 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.566 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.701 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.978 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.106 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.417 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.249 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.534 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.808 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.082 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:11:12.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.196 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.474 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.753 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.026 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.895 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.447 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.247 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.547 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.650 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.926 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.940 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.491 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.638 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.465 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.465 14:11:47 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:20.465 14:11:47 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:20.465 14:11:47 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:20.465 14:11:47 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:13:20.465 14:11:47 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:20.465 14:11:47 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' 
rdma == rdma ']' 00:13:20.465 14:11:47 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:13:20.465 14:11:47 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:20.465 14:11:47 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:20.465 rmmod nvme_rdma 00:13:20.465 rmmod nvme_fabrics 00:13:20.465 14:11:47 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:20.465 14:11:47 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:13:20.465 14:11:47 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:13:20.465 14:11:47 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 4193359 ']' 00:13:20.465 14:11:47 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 4193359 00:13:20.465 14:11:47 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 4193359 ']' 00:13:20.465 14:11:47 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 4193359 00:13:20.465 14:11:47 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:13:20.465 14:11:47 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:20.465 14:11:47 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4193359 00:13:20.465 14:11:47 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:20.465 14:11:47 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:20.465 14:11:47 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4193359' 00:13:20.465 killing process with pid 4193359 00:13:20.465 14:11:47 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 4193359 00:13:20.465 14:11:47 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 4193359 00:13:20.723 14:11:48 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:20.723 14:11:48 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:20.723 00:13:20.723 real 5m32.143s 00:13:20.723 user 21m49.702s 00:13:20.723 sys 0m12.792s 00:13:20.723 14:11:48 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:20.723 14:11:48 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:20.723 ************************************ 00:13:20.723 END TEST nvmf_connect_disconnect 00:13:20.723 ************************************ 00:13:20.723 14:11:48 nvmf_rdma -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:13:20.723 14:11:48 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:20.723 14:11:48 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:20.723 14:11:48 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:20.980 ************************************ 00:13:20.980 START TEST nvmf_multitarget 00:13:20.980 ************************************ 00:13:20.980 14:11:48 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:13:20.980 * Looking for test storage... 
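The teardown traced above (nvmftestfini -> nvmfcleanup -> killprocess) reduces to unloading the initiator modules and reaping the target process. A simplified sketch; the body of the {1..20} loop is an assumption, since only the loop header and the modprobe calls are visible in the trace:

    # Flush outstanding I/O, then unload nvme-rdma and nvme-fabrics.
    # set +e tolerates transient "module in use" failures while
    # connections drain; common.sh retries up to 20 times. (Loop body
    # assumed; only the loop header appears in the trace.)
    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e
    # killprocess: signal the nvmf_tgt pid and wait for it to exit.
    kill "$nvmfpid" && wait "$nvmfpid"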
00:13:20.980 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:20.980 14:11:48 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:20.980 14:11:48 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:20.980 14:11:48 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:20.980 14:11:48 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:20.980 14:11:48 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:20.980 14:11:48 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:20.980 14:11:48 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:20.980 14:11:48 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:20.980 14:11:48 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:20.980 14:11:48 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:20.980 14:11:48 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:20.980 14:11:48 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:20.981 14:11:48 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:13:20.981 14:11:48 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:13:20.981 14:11:48 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:20.981 14:11:48 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:20.981 14:11:48 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:20.981 14:11:48 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:20.981 14:11:48 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:20.981 14:11:48 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.981 14:11:48 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.981 14:11:48 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.981 14:11:48 nvmf_rdma.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.981 14:11:48 nvmf_rdma.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.981 14:11:48 nvmf_rdma.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.981 14:11:48 nvmf_rdma.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:20.981 14:11:48 nvmf_rdma.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.981 14:11:48 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:13:20.981 14:11:48 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:20.981 14:11:48 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:20.981 14:11:48 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:20.981 14:11:48 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:20.981 14:11:48 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:20.981 14:11:48 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:20.981 14:11:48 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:20.981 14:11:48 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:20.981 14:11:48 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:20.981 14:11:48 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:20.981 14:11:48 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:20.981 14:11:48 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:20.981 14:11:48 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:20.981 14:11:48 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:20.981 14:11:48 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:20.981 14:11:48 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:13:20.981 14:11:48 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:20.981 14:11:48 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.981 14:11:48 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:20.981 14:11:48 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:20.981 14:11:48 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:13:20.981 14:11:48 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:23.509 14:11:50 
nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:13:23.509 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:13:23.509 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:23.509 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:13:23.510 Found net devices under 0000:81:00.0: mlx_0_0 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- 
nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:13:23.510 Found net devices under 0000:81:00.1: mlx_0_1 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@420 -- # rdma_device_init 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@58 -- # uname 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:23.510 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:23.510 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:13:23.510 altname enp129s0f0np0 00:13:23.510 inet 192.168.100.8/24 scope global mlx_0_0 00:13:23.510 valid_lft forever preferred_lft forever 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:23.510 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:23.510 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:13:23.510 altname enp129s0f1np1 00:13:23.510 inet 192.168.100.9/24 scope global mlx_0_1 00:13:23.510 valid_lft forever preferred_lft forever 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 
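The interface-to-address lookups above (get_ip_address, nvmf/common.sh@112-113) resolve to a one-line pipeline: column 4 of `ip -o -4 addr show` is "ADDR/PREFIX", so awk selects the column and cut strips the prefix length. A standalone sketch of the same helper:

    # Print the bare IPv4 address of a NIC, as the trace does for
    # mlx_0_0 (192.168.100.8) and mlx_0_1 (192.168.100.9).
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0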
00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:23.510 192.168.100.9' 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:23.510 192.168.100.9' 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # head -n 1 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:23.510 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:23.510 192.168.100.9' 00:13:23.511 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # tail -n +2 00:13:23.511 14:11:50 
nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # head -n 1 00:13:23.511 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:23.511 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:23.511 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:23.511 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:23.511 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:23.511 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:23.511 14:11:50 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:23.511 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:23.511 14:11:50 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:23.511 14:11:50 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:23.511 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=42276 00:13:23.511 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:23.511 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 42276 00:13:23.511 14:11:50 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 42276 ']' 00:13:23.511 14:11:50 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.511 14:11:50 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:23.511 14:11:50 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.511 14:11:50 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:23.511 14:11:50 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:23.511 [2024-07-24 14:11:50.704882] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:13:23.511 [2024-07-24 14:11:50.704967] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:23.511 EAL: No free 2048 kB hugepages reported on node 1 00:13:23.511 [2024-07-24 14:11:50.777332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:23.511 [2024-07-24 14:11:50.867330] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:23.511 [2024-07-24 14:11:50.867381] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:23.511 [2024-07-24 14:11:50.867406] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:23.511 [2024-07-24 14:11:50.867419] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:23.511 [2024-07-24 14:11:50.867432] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
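The target launch traced above (nvmfappstart, here with nvmfpid=42276) starts nvmf_tgt and blocks until its RPC socket answers. A condensed sketch; the polling loop is an assumption standing in for waitforlisten(), which additionally checks that the pid is still alive and bounds the retries, and $rootdir abbreviates /var/jenkins/workspace/nvmf-phy-autotest/spdk:

    # -i 0: shm id, -e 0xFFFF: enable all tracepoint groups, -m 0xF: cores 0-3
    "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the default RPC socket until the app responds (simplified
    # stand-in for waitforlisten).
    until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.5
    done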
00:13:23.511 [2024-07-24 14:11:50.867493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.511 [2024-07-24 14:11:50.867560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:23.511 [2024-07-24 14:11:50.867650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:23.511 [2024-07-24 14:11:50.867653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.769 14:11:50 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:23.769 14:11:50 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:13:23.769 14:11:50 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:23.769 14:11:50 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:23.769 14:11:50 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:23.769 14:11:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:23.769 14:11:51 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:23.769 14:11:51 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:23.769 14:11:51 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:23.769 14:11:51 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:23.769 14:11:51 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:24.026 "nvmf_tgt_1" 00:13:24.026 14:11:51 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:24.026 "nvmf_tgt_2" 00:13:24.026 14:11:51 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:24.026 14:11:51 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:24.283 14:11:51 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:24.283 14:11:51 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:24.283 true 00:13:24.283 14:11:51 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:24.541 true 00:13:24.541 14:11:51 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:24.541 14:11:51 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:24.541 14:11:51 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:24.541 14:11:51 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:24.541 14:11:51 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:24.541 14:11:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:24.541 
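The nvmf_multitarget body above is a create/count/delete round trip over the helper script named at multitarget.sh@13. Condensed into the commands actually issued in the trace (-s 32 is passed to nvmf_create_target; it appears to cap the new target's subsystem count):

    rpc_py=$rootdir/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # only the default target
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]   # default + 2 new
    $rpc_py nvmf_delete_target -n nvmf_tgt_1
    $rpc_py nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # back to the default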
14:11:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:13:24.541 14:11:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:24.541 14:11:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:24.541 14:11:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:13:24.541 14:11:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:24.541 14:11:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:24.541 rmmod nvme_rdma 00:13:24.541 rmmod nvme_fabrics 00:13:24.541 14:11:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:24.541 14:11:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:13:24.541 14:11:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:13:24.541 14:11:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 42276 ']' 00:13:24.541 14:11:51 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 42276 00:13:24.541 14:11:51 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 42276 ']' 00:13:24.541 14:11:51 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 42276 00:13:24.541 14:11:51 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:13:24.541 14:11:51 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:24.541 14:11:51 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 42276 00:13:24.541 14:11:51 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:24.541 14:11:51 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:24.542 14:11:51 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 42276' 00:13:24.542 killing process with pid 42276 00:13:24.542 14:11:51 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 42276 00:13:24.542 14:11:51 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 42276 00:13:24.799 14:11:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:24.799 14:11:52 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:24.799 00:13:24.799 real 0m3.977s 00:13:24.799 user 0m6.354s 00:13:24.799 sys 0m2.171s 00:13:24.799 14:11:52 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:24.799 14:11:52 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:24.799 ************************************ 00:13:24.799 END TEST nvmf_multitarget 00:13:24.799 ************************************ 00:13:24.799 14:11:52 nvmf_rdma -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:13:24.799 14:11:52 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:24.799 14:11:52 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:24.799 14:11:52 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:24.799 ************************************ 00:13:24.799 START TEST nvmf_rpc 00:13:24.799 ************************************ 00:13:24.799 14:11:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:13:25.058 * Looking for test storage... 
00:13:25.058 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:25.058 14:11:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:25.058 14:11:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:25.058 14:11:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:25.058 14:11:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:25.058 14:11:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:25.058 14:11:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:25.058 14:11:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:25.058 14:11:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:25.058 14:11:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:25.058 14:11:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:25.058 14:11:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:25.058 14:11:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:25.058 14:11:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:13:25.058 14:11:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:13:25.058 14:11:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:25.058 14:11:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:25.058 14:11:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:25.058 14:11:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:25.058 14:11:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:25.058 14:11:52 nvmf_rdma.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.058 14:11:52 nvmf_rdma.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.059 14:11:52 nvmf_rdma.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.059 14:11:52 nvmf_rdma.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.059 14:11:52 nvmf_rdma.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.059 14:11:52 nvmf_rdma.nvmf_rpc 
-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.059 14:11:52 nvmf_rdma.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:25.059 14:11:52 nvmf_rdma.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.059 14:11:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:13:25.059 14:11:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:25.059 14:11:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:25.059 14:11:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:25.059 14:11:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:25.059 14:11:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:25.059 14:11:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:25.059 14:11:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:25.059 14:11:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:25.059 14:11:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:25.059 14:11:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:25.059 14:11:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:25.059 14:11:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:25.059 14:11:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:25.059 14:11:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:25.059 14:11:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:25.059 14:11:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.059 14:11:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:25.059 14:11:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.059 14:11:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:25.059 14:11:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:25.059 14:11:52 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:13:25.059 14:11:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:13:27.590 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:27.590 14:11:54 
nvmf_rdma.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:13:27.590 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:13:27.590 Found net devices under 0000:81:00.0: mlx_0_0 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:13:27.590 Found net devices under 0000:81:00.1: mlx_0_1 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@420 -- # rdma_device_init 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@58 -- # uname 00:13:27.590 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:27.591 
14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:27.591 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:27.591 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:13:27.591 altname enp129s0f0np0 00:13:27.591 inet 192.168.100.8/24 scope global mlx_0_0 00:13:27.591 valid_lft forever preferred_lft forever 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- 
nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:27.591 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:27.591 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:13:27.591 altname enp129s0f1np1 00:13:27.591 inet 192.168.100.9/24 scope global mlx_0_1 00:13:27.591 valid_lft forever preferred_lft forever 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- 
nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:27.591 192.168.100.9' 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:27.591 192.168.100.9' 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # head -n 1 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:27.591 192.168.100.9' 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # tail -n +2 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # head -n 1 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=44453 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 44453 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 44453 ']' 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
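[Annotation] The trace above walks get_rdma_if_list/get_ip_address for each mlx_0_* port, then splits RDMA_IP_LIST into the first and second target IPs with head/tail. A minimal standalone sketch of that pipeline, assuming the two mlx_0_* netdevs from this run; the loop and variable layout are illustrative, not SPDK's exact nvmf/common.sh:

    #!/usr/bin/env bash
    # Sketch: reproduce the traced `ip -o -4 addr show | awk | cut` pipeline.
    get_ip_address() {
        local interface=$1
        # Field 4 of the one-line output is "ADDR/PREFIX"; drop the prefix length.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST=$(for dev in mlx_0_0 mlx_0_1; do get_ip_address "$dev"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9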
00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:27.591 14:11:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.591 [2024-07-24 14:11:54.847030] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:13:27.591 [2024-07-24 14:11:54.847118] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.591 EAL: No free 2048 kB hugepages reported on node 1 00:13:27.591 [2024-07-24 14:11:54.920565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:27.849 [2024-07-24 14:11:55.013132] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:27.849 [2024-07-24 14:11:55.013197] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:27.849 [2024-07-24 14:11:55.013230] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:27.849 [2024-07-24 14:11:55.013244] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:27.849 [2024-07-24 14:11:55.013257] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:27.849 [2024-07-24 14:11:55.013340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.849 [2024-07-24 14:11:55.013396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:27.849 [2024-07-24 14:11:55.013449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:27.849 [2024-07-24 14:11:55.013452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.849 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:27.849 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:13:27.849 14:11:55 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:27.849 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:27.849 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.850 14:11:55 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:27.850 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:27.850 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.850 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.850 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.850 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:27.850 "tick_rate": 2700000000, 00:13:27.850 "poll_groups": [ 00:13:27.850 { 00:13:27.850 "name": "nvmf_tgt_poll_group_000", 00:13:27.850 "admin_qpairs": 0, 00:13:27.850 "io_qpairs": 0, 00:13:27.850 "current_admin_qpairs": 0, 00:13:27.850 "current_io_qpairs": 0, 00:13:27.850 "pending_bdev_io": 0, 00:13:27.850 "completed_nvme_io": 0, 00:13:27.850 "transports": [] 00:13:27.850 }, 00:13:27.850 { 00:13:27.850 "name": "nvmf_tgt_poll_group_001", 00:13:27.850 "admin_qpairs": 0, 00:13:27.850 "io_qpairs": 0, 00:13:27.850 "current_admin_qpairs": 0, 00:13:27.850 "current_io_qpairs": 0, 00:13:27.850 "pending_bdev_io": 0, 00:13:27.850 "completed_nvme_io": 0, 00:13:27.850 "transports": [] 
00:13:27.850 }, 00:13:27.850 { 00:13:27.850 "name": "nvmf_tgt_poll_group_002", 00:13:27.850 "admin_qpairs": 0, 00:13:27.850 "io_qpairs": 0, 00:13:27.850 "current_admin_qpairs": 0, 00:13:27.850 "current_io_qpairs": 0, 00:13:27.850 "pending_bdev_io": 0, 00:13:27.850 "completed_nvme_io": 0, 00:13:27.850 "transports": [] 00:13:27.850 }, 00:13:27.850 { 00:13:27.850 "name": "nvmf_tgt_poll_group_003", 00:13:27.850 "admin_qpairs": 0, 00:13:27.850 "io_qpairs": 0, 00:13:27.850 "current_admin_qpairs": 0, 00:13:27.850 "current_io_qpairs": 0, 00:13:27.850 "pending_bdev_io": 0, 00:13:27.850 "completed_nvme_io": 0, 00:13:27.850 "transports": [] 00:13:27.850 } 00:13:27.850 ] 00:13:27.850 }' 00:13:27.850 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:27.850 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:27.850 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:27.850 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:27.850 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:28.108 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:28.108 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:28.108 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:28.108 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.108 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.108 [2024-07-24 14:11:55.288381] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xc7ca40/0xc80f30) succeed. 00:13:28.108 [2024-07-24 14:11:55.299299] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xc7e030/0xcc25c0) succeed. 
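[Annotation] Before the RDMA transport is registered, nvmf_get_stats reports four poll groups (one per core in -m 0xF) with empty "transports" arrays; nvmf_create_transport then brings the mlx5_0/mlx5_1 devices under each group, as the next stats dump shows. A condensed sketch of that check, assuming SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock:

    # Sketch only: count poll groups in the stats dump, then register RDMA.
    stats=$(scripts/rpc.py nvmf_get_stats)
    groups=$(jq '.poll_groups[].name' <<< "$stats" | wc -l)
    (( groups == 4 )) || echo "expected one poll group per core in -m 0xF"
    # Same transport options the test derives for rdma:
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    # Re-querying nvmf_get_stats now lists an RDMA transport with the
    # mlx5_0/mlx5_1 devices under every poll group.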
00:13:28.108 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.108 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:28.108 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.108 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.108 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.108 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:28.108 "tick_rate": 2700000000, 00:13:28.108 "poll_groups": [ 00:13:28.108 { 00:13:28.108 "name": "nvmf_tgt_poll_group_000", 00:13:28.108 "admin_qpairs": 0, 00:13:28.108 "io_qpairs": 0, 00:13:28.108 "current_admin_qpairs": 0, 00:13:28.108 "current_io_qpairs": 0, 00:13:28.108 "pending_bdev_io": 0, 00:13:28.108 "completed_nvme_io": 0, 00:13:28.108 "transports": [ 00:13:28.108 { 00:13:28.108 "trtype": "RDMA", 00:13:28.108 "pending_data_buffer": 0, 00:13:28.108 "devices": [ 00:13:28.108 { 00:13:28.108 "name": "mlx5_0", 00:13:28.108 "polls": 21645, 00:13:28.108 "idle_polls": 21645, 00:13:28.108 "completions": 0, 00:13:28.108 "requests": 0, 00:13:28.108 "request_latency": 0, 00:13:28.108 "pending_free_request": 0, 00:13:28.108 "pending_rdma_read": 0, 00:13:28.108 "pending_rdma_write": 0, 00:13:28.108 "pending_rdma_send": 0, 00:13:28.108 "total_send_wrs": 0, 00:13:28.108 "send_doorbell_updates": 0, 00:13:28.108 "total_recv_wrs": 4096, 00:13:28.108 "recv_doorbell_updates": 1 00:13:28.108 }, 00:13:28.108 { 00:13:28.108 "name": "mlx5_1", 00:13:28.108 "polls": 21645, 00:13:28.108 "idle_polls": 21645, 00:13:28.108 "completions": 0, 00:13:28.108 "requests": 0, 00:13:28.108 "request_latency": 0, 00:13:28.108 "pending_free_request": 0, 00:13:28.108 "pending_rdma_read": 0, 00:13:28.108 "pending_rdma_write": 0, 00:13:28.108 "pending_rdma_send": 0, 00:13:28.108 "total_send_wrs": 0, 00:13:28.108 "send_doorbell_updates": 0, 00:13:28.108 "total_recv_wrs": 4096, 00:13:28.108 "recv_doorbell_updates": 1 00:13:28.108 } 00:13:28.108 ] 00:13:28.108 } 00:13:28.108 ] 00:13:28.108 }, 00:13:28.108 { 00:13:28.108 "name": "nvmf_tgt_poll_group_001", 00:13:28.108 "admin_qpairs": 0, 00:13:28.108 "io_qpairs": 0, 00:13:28.108 "current_admin_qpairs": 0, 00:13:28.108 "current_io_qpairs": 0, 00:13:28.108 "pending_bdev_io": 0, 00:13:28.108 "completed_nvme_io": 0, 00:13:28.108 "transports": [ 00:13:28.108 { 00:13:28.108 "trtype": "RDMA", 00:13:28.108 "pending_data_buffer": 0, 00:13:28.108 "devices": [ 00:13:28.108 { 00:13:28.108 "name": "mlx5_0", 00:13:28.108 "polls": 14777, 00:13:28.108 "idle_polls": 14777, 00:13:28.108 "completions": 0, 00:13:28.108 "requests": 0, 00:13:28.108 "request_latency": 0, 00:13:28.108 "pending_free_request": 0, 00:13:28.108 "pending_rdma_read": 0, 00:13:28.108 "pending_rdma_write": 0, 00:13:28.108 "pending_rdma_send": 0, 00:13:28.108 "total_send_wrs": 0, 00:13:28.108 "send_doorbell_updates": 0, 00:13:28.108 "total_recv_wrs": 4096, 00:13:28.108 "recv_doorbell_updates": 1 00:13:28.108 }, 00:13:28.108 { 00:13:28.108 "name": "mlx5_1", 00:13:28.108 "polls": 14777, 00:13:28.108 "idle_polls": 14777, 00:13:28.108 "completions": 0, 00:13:28.108 "requests": 0, 00:13:28.108 "request_latency": 0, 00:13:28.108 "pending_free_request": 0, 00:13:28.108 "pending_rdma_read": 0, 00:13:28.108 "pending_rdma_write": 0, 00:13:28.108 "pending_rdma_send": 0, 00:13:28.108 "total_send_wrs": 0, 00:13:28.108 "send_doorbell_updates": 0, 00:13:28.109 "total_recv_wrs": 4096, 00:13:28.109 "recv_doorbell_updates": 
1 00:13:28.109 } 00:13:28.109 ] 00:13:28.109 } 00:13:28.109 ] 00:13:28.109 }, 00:13:28.109 { 00:13:28.109 "name": "nvmf_tgt_poll_group_002", 00:13:28.109 "admin_qpairs": 0, 00:13:28.109 "io_qpairs": 0, 00:13:28.109 "current_admin_qpairs": 0, 00:13:28.109 "current_io_qpairs": 0, 00:13:28.109 "pending_bdev_io": 0, 00:13:28.109 "completed_nvme_io": 0, 00:13:28.109 "transports": [ 00:13:28.109 { 00:13:28.109 "trtype": "RDMA", 00:13:28.109 "pending_data_buffer": 0, 00:13:28.109 "devices": [ 00:13:28.109 { 00:13:28.109 "name": "mlx5_0", 00:13:28.109 "polls": 7521, 00:13:28.109 "idle_polls": 7521, 00:13:28.109 "completions": 0, 00:13:28.109 "requests": 0, 00:13:28.109 "request_latency": 0, 00:13:28.109 "pending_free_request": 0, 00:13:28.109 "pending_rdma_read": 0, 00:13:28.109 "pending_rdma_write": 0, 00:13:28.109 "pending_rdma_send": 0, 00:13:28.109 "total_send_wrs": 0, 00:13:28.109 "send_doorbell_updates": 0, 00:13:28.109 "total_recv_wrs": 4096, 00:13:28.109 "recv_doorbell_updates": 1 00:13:28.109 }, 00:13:28.109 { 00:13:28.109 "name": "mlx5_1", 00:13:28.109 "polls": 7521, 00:13:28.109 "idle_polls": 7521, 00:13:28.109 "completions": 0, 00:13:28.109 "requests": 0, 00:13:28.109 "request_latency": 0, 00:13:28.109 "pending_free_request": 0, 00:13:28.109 "pending_rdma_read": 0, 00:13:28.109 "pending_rdma_write": 0, 00:13:28.109 "pending_rdma_send": 0, 00:13:28.109 "total_send_wrs": 0, 00:13:28.109 "send_doorbell_updates": 0, 00:13:28.109 "total_recv_wrs": 4096, 00:13:28.109 "recv_doorbell_updates": 1 00:13:28.109 } 00:13:28.109 ] 00:13:28.109 } 00:13:28.109 ] 00:13:28.109 }, 00:13:28.109 { 00:13:28.109 "name": "nvmf_tgt_poll_group_003", 00:13:28.109 "admin_qpairs": 0, 00:13:28.109 "io_qpairs": 0, 00:13:28.109 "current_admin_qpairs": 0, 00:13:28.109 "current_io_qpairs": 0, 00:13:28.109 "pending_bdev_io": 0, 00:13:28.109 "completed_nvme_io": 0, 00:13:28.109 "transports": [ 00:13:28.109 { 00:13:28.109 "trtype": "RDMA", 00:13:28.109 "pending_data_buffer": 0, 00:13:28.109 "devices": [ 00:13:28.109 { 00:13:28.109 "name": "mlx5_0", 00:13:28.109 "polls": 944, 00:13:28.109 "idle_polls": 944, 00:13:28.109 "completions": 0, 00:13:28.109 "requests": 0, 00:13:28.109 "request_latency": 0, 00:13:28.109 "pending_free_request": 0, 00:13:28.109 "pending_rdma_read": 0, 00:13:28.109 "pending_rdma_write": 0, 00:13:28.109 "pending_rdma_send": 0, 00:13:28.109 "total_send_wrs": 0, 00:13:28.109 "send_doorbell_updates": 0, 00:13:28.109 "total_recv_wrs": 4096, 00:13:28.109 "recv_doorbell_updates": 1 00:13:28.109 }, 00:13:28.109 { 00:13:28.109 "name": "mlx5_1", 00:13:28.109 "polls": 944, 00:13:28.109 "idle_polls": 944, 00:13:28.109 "completions": 0, 00:13:28.109 "requests": 0, 00:13:28.109 "request_latency": 0, 00:13:28.109 "pending_free_request": 0, 00:13:28.109 "pending_rdma_read": 0, 00:13:28.109 "pending_rdma_write": 0, 00:13:28.109 "pending_rdma_send": 0, 00:13:28.109 "total_send_wrs": 0, 00:13:28.109 "send_doorbell_updates": 0, 00:13:28.109 "total_recv_wrs": 4096, 00:13:28.109 "recv_doorbell_updates": 1 00:13:28.109 } 00:13:28.109 ] 00:13:28.109 } 00:13:28.109 ] 00:13:28.109 } 00:13:28.109 ] 00:13:28.109 }' 00:13:28.109 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:28.109 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:28.367 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:28.367 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:28.367 
14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:28.367 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:28.367 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:28.367 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:28.367 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:28.367 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:28.367 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:13:28.367 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:13:28.367 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:13:28.367 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:13:28.367 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:28.367 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:13:28.367 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:13:28.367 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:13:28.367 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:13:28.367 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:13:28.367 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:13:28.367 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:13:28.367 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:28.367 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:13:28.367 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:28.367 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:28.367 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:28.367 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.367 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.367 Malloc1 00:13:28.367 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.367 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:28.368 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.368 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.368 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.368 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:28.368 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.368 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.368 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.368 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:28.368 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.368 
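[Annotation] The jcount/jsum checks above and the provisioning that follows are all driven over the RPC socket. A sketch of the helpers (reconstructed from the traced jq|wc and jq|awk pipelines) and of the Malloc1/subsystem setup, with rpc.py standing in for the test's rpc_cmd wrapper:

    # Reconstructed helpers: count matches / sum numeric fields in a stats dump.
    jcount() { jq "$1" <<< "$stats" | wc -l; }
    jsum()   { jq "$1" <<< "$stats" | awk '{s+=$1} END {print s}'; }

    stats=$(scripts/rpc.py nvmf_get_stats)
    jsum '.poll_groups[].admin_qpairs'   # 0: no host has connected yet
    jsum '.poll_groups[].io_qpairs'      # 0

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1   # 64 MiB, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    # -d: close the subsystem so only explicitly added host NQNs may connect.
    scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420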
14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.368 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.368 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:28.368 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.368 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.368 [2024-07-24 14:11:55.732707] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:28.368 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.368 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -a 192.168.100.8 -s 4420 00:13:28.368 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:13:28.368 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -a 192.168.100.8 -s 4420 00:13:28.368 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:13:28.625 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:28.625 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:13:28.625 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:28.625 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:13:28.625 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:28.625 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:13:28.625 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:13:28.625 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -a 192.168.100.8 -s 4420 00:13:28.625 [2024-07-24 14:11:55.772443] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911' 00:13:28.625 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:28.625 could not add new controller: failed to write to nvme-fabrics device 00:13:28.625 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:13:28.625 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:28.625 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:28.625 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:28.625 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:13:28.625 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.625 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.625 14:11:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.625 14:11:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:30.027 14:11:56 nvmf_rdma.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:30.027 14:11:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:30.027 14:11:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:30.027 14:11:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:30.027 14:11:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:31.922 14:11:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:31.922 14:11:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:31.922 14:11:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:31.922 14:11:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:31.922 14:11:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:31.922 14:11:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:31.922 14:11:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:32.853 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.853 14:12:00 nvmf_rdma.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:32.853 14:12:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:32.853 14:12:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:32.853 14:12:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:32.853 14:12:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:32.853 14:12:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:32.853 14:12:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:32.854 14:12:00 nvmf_rdma.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:13:32.854 14:12:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.854 14:12:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.854 14:12:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.854 14:12:00 nvmf_rdma.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:32.854 14:12:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:13:32.854 14:12:00 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@650 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:32.854 14:12:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:13:32.854 14:12:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:32.854 14:12:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:13:32.854 14:12:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:32.854 14:12:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:13:32.854 14:12:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:32.854 14:12:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:13:32.854 14:12:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:13:32.854 14:12:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:32.854 [2024-07-24 14:12:00.155214] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911' 00:13:32.854 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:32.854 could not add new controller: failed to write to nvme-fabrics device 00:13:32.854 14:12:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:13:32.854 14:12:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:32.854 14:12:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:32.854 14:12:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:32.854 14:12:00 nvmf_rdma.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:32.854 14:12:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.854 14:12:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.854 14:12:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.854 14:12:00 nvmf_rdma.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:34.225 14:12:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:34.225 14:12:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:34.225 14:12:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:34.225 14:12:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:34.225 14:12:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:36.124 14:12:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:36.124 14:12:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:36.124 14:12:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c 
SPDKISFASTANDAWESOME 00:13:36.124 14:12:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:36.124 14:12:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:36.124 14:12:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:36.124 14:12:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:37.057 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.057 14:12:04 nvmf_rdma.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:37.057 14:12:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:37.057 14:12:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:37.057 14:12:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:37.314 14:12:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:37.315 14:12:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:37.315 14:12:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:37.315 14:12:04 nvmf_rdma.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:37.315 14:12:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.315 14:12:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.315 14:12:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.315 14:12:04 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:37.315 14:12:04 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:37.315 14:12:04 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:37.315 14:12:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.315 14:12:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.315 14:12:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.315 14:12:04 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:37.315 14:12:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.315 14:12:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.315 [2024-07-24 14:12:04.468476] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:37.315 14:12:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.315 14:12:04 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:37.315 14:12:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.315 14:12:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.315 14:12:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.315 14:12:04 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:37.315 14:12:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.315 14:12:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.315 14:12:04 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.315 14:12:04 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:38.685 14:12:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:38.685 14:12:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:38.685 14:12:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:38.685 14:12:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:38.685 14:12:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:40.578 14:12:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:40.578 14:12:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:40.578 14:12:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:40.578 14:12:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:40.578 14:12:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:40.578 14:12:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:40.578 14:12:07 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:41.509 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.509 14:12:08 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:41.509 14:12:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:41.509 14:12:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:41.509 14:12:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:41.509 14:12:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:41.509 14:12:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:41.509 14:12:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:41.509 14:12:08 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:41.509 14:12:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.509 14:12:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.509 14:12:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.509 14:12:08 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:41.509 14:12:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.509 14:12:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.509 14:12:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.509 14:12:08 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:41.509 14:12:08 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:41.509 14:12:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.509 14:12:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
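[Annotation] The denied-then-allowed sequence traced above is the access-control check: the NOT wrapper inverts the exit status, so a connect attempt is *expected* to fail with "does not allow host" until the host NQN is added (or any-host access is re-enabled). Schematically, using the host UUID from this run (--hostid omitted for brevity):

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911

    # Expected to fail: the subsystem does not allow this host yet.
    nvme connect -i 15 --hostnqn=$HOSTNQN -t rdma -n nqn.2016-06.io.spdk:cnode1 \
        -a 192.168.100.8 -s 4420 && echo "BUG: connect should have been rejected"

    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 $HOSTNQN
    nvme connect -i 15 --hostnqn=$HOSTNQN -t rdma -n nqn.2016-06.io.spdk:cnode1 \
        -a 192.168.100.8 -s 4420              # now succeeds
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1

    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 $HOSTNQN
    # Denied again, until any-host access is switched back on:
    scripts/rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1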
00:13:41.509 14:12:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.509 14:12:08 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:41.509 14:12:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.509 14:12:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.509 [2024-07-24 14:12:08.749742] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:41.509 14:12:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.509 14:12:08 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:41.509 14:12:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.509 14:12:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.509 14:12:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.509 14:12:08 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:41.509 14:12:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.509 14:12:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.509 14:12:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.509 14:12:08 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:42.879 14:12:09 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:42.879 14:12:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:42.879 14:12:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:42.879 14:12:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:42.879 14:12:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:44.773 14:12:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:44.773 14:12:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:44.773 14:12:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:44.773 14:12:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:44.774 14:12:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:44.774 14:12:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:44.774 14:12:11 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:45.706 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.706 14:12:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:45.706 14:12:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:45.706 14:12:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:45.706 14:12:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:45.706 14:12:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o 
NAME,SERIAL 00:13:45.706 14:12:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:45.706 14:12:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:45.706 14:12:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:45.706 14:12:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.706 14:12:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.706 14:12:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.706 14:12:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:45.706 14:12:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.706 14:12:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.706 14:12:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.706 14:12:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:45.706 14:12:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:45.706 14:12:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.706 14:12:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.706 14:12:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.706 14:12:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:45.706 14:12:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.706 14:12:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.992 [2024-07-24 14:12:13.077461] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:45.992 14:12:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.992 14:12:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:45.992 14:12:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.992 14:12:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.992 14:12:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.992 14:12:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:45.992 14:12:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.992 14:12:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.992 14:12:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.992 14:12:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:46.922 14:12:14 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:46.922 14:12:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:46.922 14:12:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:46.922 14:12:14 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:46.922 14:12:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:49.445 14:12:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:49.445 14:12:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:49.445 14:12:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:49.445 14:12:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:49.445 14:12:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:49.445 14:12:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:49.445 14:12:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:50.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.010 14:12:17 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:50.010 14:12:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:50.010 14:12:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:50.010 14:12:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:50.010 14:12:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:50.010 14:12:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:50.010 14:12:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:50.010 14:12:17 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:50.010 14:12:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.010 14:12:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.010 14:12:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.010 14:12:17 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:50.010 14:12:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.010 14:12:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.010 14:12:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.010 14:12:17 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:50.010 14:12:17 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:50.010 14:12:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.010 14:12:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.267 14:12:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.267 14:12:17 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:50.267 14:12:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.267 14:12:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.267 [2024-07-24 14:12:17.386162] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:50.267 14:12:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.267 
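[Annotation] Each of the five connect/disconnect passes in this loop gates on the waitforserial helper traced above: the host side counts as connected once lsblk reports a block device carrying the subsystem serial. A sketch of that polling loop, reconstructed from the traced lsblk/grep pipeline and the (( i++ <= 15 )) / sleep 2 cadence:

    waitforserial() {
        local serial=$1 i=0 nvme_devices
        while (( i++ <= 15 )); do
            # The namespace appears in lsblk tagged with the subsystem serial.
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == 1 )) && return 0
            sleep 2
        done
        return 1
    }
    waitforserial SPDKISFASTANDAWESOME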
14:12:17 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:50.267 14:12:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.267 14:12:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.267 14:12:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.267 14:12:17 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:50.267 14:12:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.267 14:12:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.267 14:12:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.267 14:12:17 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:51.638 14:12:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:51.638 14:12:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:51.638 14:12:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:51.638 14:12:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:51.638 14:12:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:53.534 14:12:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:53.534 14:12:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:53.534 14:12:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:53.534 14:12:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:53.534 14:12:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:53.534 14:12:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:53.534 14:12:20 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:54.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.466 14:12:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:54.466 14:12:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:54.466 14:12:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:54.466 14:12:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:54.466 14:12:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:54.466 14:12:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:54.466 14:12:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:54.466 14:12:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:54.466 14:12:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.466 14:12:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.466 14:12:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.466 14:12:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:54.466 14:12:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.466 14:12:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.466 14:12:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.466 14:12:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:54.466 14:12:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:54.466 14:12:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.466 14:12:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.466 14:12:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.466 14:12:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:54.466 14:12:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.466 14:12:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.466 [2024-07-24 14:12:21.729011] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:54.466 14:12:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.466 14:12:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:54.466 14:12:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.466 14:12:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.466 14:12:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.466 14:12:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:54.466 14:12:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.466 14:12:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.466 14:12:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.466 14:12:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:55.838 14:12:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:55.838 14:12:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:55.838 14:12:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:55.838 14:12:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:55.838 14:12:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:57.735 14:12:24 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:57.735 14:12:24 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:57.735 14:12:24 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:57.735 14:12:24 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:57.735 14:12:24 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:57.735 14:12:24 
nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:57.735 14:12:24 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:58.668 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.668 14:12:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:58.668 14:12:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:58.668 14:12:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:58.668 14:12:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:58.668 14:12:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:58.668 14:12:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:58.668 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:58.668 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:58.668 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.668 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.668 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.669 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:58.669 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.669 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.669 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.669 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:58.669 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:58.669 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:58.669 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.669 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.669 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.669 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:58.669 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.669 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.927 [2024-07-24 14:12:26.039748] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:58.927 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.927 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:58.927 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.927 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.927 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.927 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:58.927 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:13:58.927 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.927 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.927 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.927 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.927 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.927 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.927 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:58.927 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.927 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.927 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.927 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:58.927 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:58.927 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.927 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.927 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.927 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:58.927 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.927 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.927 [2024-07-24 14:12:26.091896] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:58.927 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.927 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.928 [2024-07-24 14:12:26.140403] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.928 [2024-07-24 14:12:26.188917] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.928 [2024-07-24 14:12:26.237419] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.928 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.186 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.186 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:59.186 "tick_rate": 2700000000, 00:13:59.186 "poll_groups": [ 00:13:59.186 { 00:13:59.186 "name": "nvmf_tgt_poll_group_000", 00:13:59.186 "admin_qpairs": 2, 00:13:59.186 "io_qpairs": 27, 00:13:59.186 "current_admin_qpairs": 0, 00:13:59.186 "current_io_qpairs": 0, 00:13:59.186 "pending_bdev_io": 0, 00:13:59.186 "completed_nvme_io": 77, 00:13:59.186 "transports": [ 00:13:59.186 { 00:13:59.186 "trtype": "RDMA", 00:13:59.186 "pending_data_buffer": 0, 00:13:59.186 "devices": [ 00:13:59.186 { 00:13:59.186 "name": "mlx5_0", 00:13:59.186 "polls": 3985410, 00:13:59.186 "idle_polls": 3985160, 00:13:59.186 "completions": 271, 00:13:59.186 "requests": 135, 00:13:59.186 "request_latency": 34467249, 00:13:59.186 "pending_free_request": 0, 00:13:59.186 "pending_rdma_read": 0, 00:13:59.186 "pending_rdma_write": 0, 00:13:59.186 "pending_rdma_send": 0, 00:13:59.186 "total_send_wrs": 213, 00:13:59.186 "send_doorbell_updates": 123, 00:13:59.186 "total_recv_wrs": 4231, 00:13:59.186 "recv_doorbell_updates": 123 00:13:59.186 }, 00:13:59.186 { 00:13:59.186 "name": "mlx5_1", 00:13:59.186 "polls": 3985410, 00:13:59.186 "idle_polls": 3985410, 00:13:59.186 "completions": 0, 00:13:59.186 "requests": 0, 00:13:59.186 "request_latency": 0, 00:13:59.186 "pending_free_request": 0, 00:13:59.186 "pending_rdma_read": 0, 00:13:59.186 "pending_rdma_write": 0, 00:13:59.186 "pending_rdma_send": 0, 00:13:59.186 "total_send_wrs": 0, 00:13:59.186 "send_doorbell_updates": 0, 00:13:59.186 "total_recv_wrs": 4096, 00:13:59.186 "recv_doorbell_updates": 1 00:13:59.186 } 00:13:59.186 ] 00:13:59.186 } 00:13:59.186 ] 00:13:59.186 }, 00:13:59.186 { 00:13:59.186 "name": "nvmf_tgt_poll_group_001", 00:13:59.186 "admin_qpairs": 2, 00:13:59.186 "io_qpairs": 26, 00:13:59.186 "current_admin_qpairs": 0, 00:13:59.186 "current_io_qpairs": 0, 00:13:59.186 "pending_bdev_io": 0, 00:13:59.186 "completed_nvme_io": 127, 00:13:59.186 "transports": [ 00:13:59.186 { 00:13:59.186 "trtype": "RDMA", 00:13:59.186 
"pending_data_buffer": 0, 00:13:59.186 "devices": [ 00:13:59.186 { 00:13:59.186 "name": "mlx5_0", 00:13:59.186 "polls": 4064203, 00:13:59.186 "idle_polls": 4063876, 00:13:59.186 "completions": 368, 00:13:59.186 "requests": 184, 00:13:59.186 "request_latency": 54547668, 00:13:59.186 "pending_free_request": 0, 00:13:59.186 "pending_rdma_read": 0, 00:13:59.186 "pending_rdma_write": 0, 00:13:59.186 "pending_rdma_send": 0, 00:13:59.186 "total_send_wrs": 312, 00:13:59.186 "send_doorbell_updates": 160, 00:13:59.186 "total_recv_wrs": 4280, 00:13:59.186 "recv_doorbell_updates": 161 00:13:59.186 }, 00:13:59.186 { 00:13:59.186 "name": "mlx5_1", 00:13:59.186 "polls": 4064203, 00:13:59.186 "idle_polls": 4064203, 00:13:59.186 "completions": 0, 00:13:59.186 "requests": 0, 00:13:59.186 "request_latency": 0, 00:13:59.186 "pending_free_request": 0, 00:13:59.186 "pending_rdma_read": 0, 00:13:59.186 "pending_rdma_write": 0, 00:13:59.186 "pending_rdma_send": 0, 00:13:59.186 "total_send_wrs": 0, 00:13:59.186 "send_doorbell_updates": 0, 00:13:59.187 "total_recv_wrs": 4096, 00:13:59.187 "recv_doorbell_updates": 1 00:13:59.187 } 00:13:59.187 ] 00:13:59.187 } 00:13:59.187 ] 00:13:59.187 }, 00:13:59.187 { 00:13:59.187 "name": "nvmf_tgt_poll_group_002", 00:13:59.187 "admin_qpairs": 1, 00:13:59.187 "io_qpairs": 26, 00:13:59.187 "current_admin_qpairs": 0, 00:13:59.187 "current_io_qpairs": 0, 00:13:59.187 "pending_bdev_io": 0, 00:13:59.187 "completed_nvme_io": 126, 00:13:59.187 "transports": [ 00:13:59.187 { 00:13:59.187 "trtype": "RDMA", 00:13:59.187 "pending_data_buffer": 0, 00:13:59.187 "devices": [ 00:13:59.187 { 00:13:59.187 "name": "mlx5_0", 00:13:59.187 "polls": 4089244, 00:13:59.187 "idle_polls": 4088971, 00:13:59.187 "completions": 311, 00:13:59.187 "requests": 155, 00:13:59.187 "request_latency": 50490999, 00:13:59.187 "pending_free_request": 0, 00:13:59.187 "pending_rdma_read": 0, 00:13:59.187 "pending_rdma_write": 0, 00:13:59.187 "pending_rdma_send": 0, 00:13:59.187 "total_send_wrs": 270, 00:13:59.187 "send_doorbell_updates": 132, 00:13:59.187 "total_recv_wrs": 4251, 00:13:59.187 "recv_doorbell_updates": 132 00:13:59.187 }, 00:13:59.187 { 00:13:59.187 "name": "mlx5_1", 00:13:59.187 "polls": 4089244, 00:13:59.187 "idle_polls": 4089244, 00:13:59.187 "completions": 0, 00:13:59.187 "requests": 0, 00:13:59.187 "request_latency": 0, 00:13:59.187 "pending_free_request": 0, 00:13:59.187 "pending_rdma_read": 0, 00:13:59.187 "pending_rdma_write": 0, 00:13:59.187 "pending_rdma_send": 0, 00:13:59.187 "total_send_wrs": 0, 00:13:59.187 "send_doorbell_updates": 0, 00:13:59.187 "total_recv_wrs": 4096, 00:13:59.187 "recv_doorbell_updates": 1 00:13:59.187 } 00:13:59.187 ] 00:13:59.187 } 00:13:59.187 ] 00:13:59.187 }, 00:13:59.187 { 00:13:59.187 "name": "nvmf_tgt_poll_group_003", 00:13:59.187 "admin_qpairs": 2, 00:13:59.187 "io_qpairs": 26, 00:13:59.187 "current_admin_qpairs": 0, 00:13:59.187 "current_io_qpairs": 0, 00:13:59.187 "pending_bdev_io": 0, 00:13:59.187 "completed_nvme_io": 125, 00:13:59.187 "transports": [ 00:13:59.187 { 00:13:59.187 "trtype": "RDMA", 00:13:59.187 "pending_data_buffer": 0, 00:13:59.187 "devices": [ 00:13:59.187 { 00:13:59.187 "name": "mlx5_0", 00:13:59.187 "polls": 2972047, 00:13:59.187 "idle_polls": 2971729, 00:13:59.187 "completions": 364, 00:13:59.187 "requests": 182, 00:13:59.187 "request_latency": 56738571, 00:13:59.187 "pending_free_request": 0, 00:13:59.187 "pending_rdma_read": 0, 00:13:59.187 "pending_rdma_write": 0, 00:13:59.187 "pending_rdma_send": 0, 00:13:59.187 "total_send_wrs": 309, 
00:13:59.187 "send_doorbell_updates": 157, 00:13:59.187 "total_recv_wrs": 4278, 00:13:59.187 "recv_doorbell_updates": 158 00:13:59.187 }, 00:13:59.187 { 00:13:59.187 "name": "mlx5_1", 00:13:59.187 "polls": 2972047, 00:13:59.187 "idle_polls": 2972047, 00:13:59.187 "completions": 0, 00:13:59.187 "requests": 0, 00:13:59.187 "request_latency": 0, 00:13:59.187 "pending_free_request": 0, 00:13:59.187 "pending_rdma_read": 0, 00:13:59.187 "pending_rdma_write": 0, 00:13:59.187 "pending_rdma_send": 0, 00:13:59.187 "total_send_wrs": 0, 00:13:59.187 "send_doorbell_updates": 0, 00:13:59.187 "total_recv_wrs": 4096, 00:13:59.187 "recv_doorbell_updates": 1 00:13:59.187 } 00:13:59.187 ] 00:13:59.187 } 00:13:59.187 ] 00:13:59.187 } 00:13:59.187 ] 00:13:59.187 }' 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@117 -- # (( 1314 > 0 )) 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@118 -- # (( 196244487 > 0 )) 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- 
nvmf/common.sh@121 -- # for i in {1..20} 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:59.187 rmmod nvme_rdma 00:13:59.187 rmmod nvme_fabrics 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 44453 ']' 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 44453 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 44453 ']' 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 44453 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 44453 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 44453' 00:13:59.187 killing process with pid 44453 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 44453 00:13:59.187 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 44453 00:13:59.753 14:12:26 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:59.753 14:12:26 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:59.753 00:13:59.753 real 0m34.724s 00:13:59.753 user 2m7.556s 00:13:59.753 sys 0m3.272s 00:13:59.753 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:59.753 14:12:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.753 ************************************ 00:13:59.753 END TEST nvmf_rpc 00:13:59.753 ************************************ 00:13:59.753 14:12:26 nvmf_rdma -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:13:59.753 14:12:26 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:59.753 14:12:26 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:59.753 14:12:26 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:59.753 ************************************ 00:13:59.753 START TEST nvmf_invalid 00:13:59.753 ************************************ 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:13:59.754 * Looking for test storage... 
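The nvmf_rpc loop traced above exercises the whole subsystem lifecycle over RDMA through rpc.py. A minimal sketch of one iteration, assuming a running nvmf_tgt with an existing Malloc1 bdev and the same 192.168.100.8 listener as this host (the rpc.py path and serial are placeholders lifted from the trace; this is not a drop-in for target/rpc.sh):

    rpc=./scripts/rpc.py                      # path to rpc.py is an assumption
    nqn=nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener "$nqn" -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5
    $rpc nvmf_subsystem_allow_any_host "$nqn"
    # the trace additionally passes --hostnqn/--hostid; omitted here
    nvme connect -i 15 -t rdma -n "$nqn" -a 192.168.100.8 -s 4420
    # waitforserial: poll until the namespace surfaces as a block device
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do
        sleep 2
    done
    nvme disconnect -n "$nqn"
    $rpc nvmf_subsystem_remove_ns "$nqn" 5
    $rpc nvmf_delete_subsystem "$nqn"

The jsum checks run just before teardown reduce to the same one-liner each time, e.g. $rpc nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1}END{print s}', followed by asserting the sum is positive.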
00:13:59.754 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:59.754 14:12:26 
nvmf_rdma.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:59.754 14:12:26 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:02.285 
14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:14:02.285 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:14:02.285 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:14:02.285 Found net devices under 0000:81:00.0: mlx_0_0 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:14:02.285 Found net devices under 0000:81:00.1: mlx_0_1 00:14:02.285 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@420 -- # rdma_device_init 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@58 -- # uname 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
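The get_rdma_if_list/get_ip_address expansion that follows reduces to reading the IPv4 address off each mlx netdev. A condensed sketch of the same pipeline, with interface names taken from this host's trace (they will differ elsewhere):

    for ifc in mlx_0_0 mlx_0_1; do
        # same extraction the harness traces below: take the addr field, strip the /24 suffix
        ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
    done

On this machine that yields 192.168.100.8 and 192.168.100.9, which the harness then records as NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP.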
00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:02.286 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:02.286 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:14:02.286 altname enp129s0f0np0 00:14:02.286 inet 192.168.100.8/24 scope global mlx_0_0 00:14:02.286 valid_lft forever preferred_lft forever 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:02.286 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:02.286 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:14:02.286 altname enp129s0f1np1 00:14:02.286 inet 192.168.100.9/24 scope global mlx_0_1 00:14:02.286 valid_lft forever preferred_lft forever 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- 
nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:02.286 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:02.287 192.168.100.9' 00:14:02.287 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:02.287 192.168.100.9' 00:14:02.287 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # head -n 1 00:14:02.287 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:02.287 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:02.287 192.168.100.9' 00:14:02.287 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # tail -n +2 00:14:02.287 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # head -n 1 00:14:02.287 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:02.287 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:02.287 14:12:29 
nvmf_rdma.nvmf_invalid -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:02.287 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:02.287 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:02.287 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:02.287 14:12:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:02.287 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:02.287 14:12:29 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:02.287 14:12:29 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:02.287 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=51036 00:14:02.287 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:02.287 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 51036 00:14:02.287 14:12:29 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 51036 ']' 00:14:02.287 14:12:29 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.287 14:12:29 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:02.287 14:12:29 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.287 14:12:29 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:02.287 14:12:29 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:02.287 [2024-07-24 14:12:29.557211] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:14:02.287 [2024-07-24 14:12:29.557280] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.287 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.287 [2024-07-24 14:12:29.629995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:02.545 [2024-07-24 14:12:29.720423] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.545 [2024-07-24 14:12:29.720488] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.545 [2024-07-24 14:12:29.720505] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.545 [2024-07-24 14:12:29.720520] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.545 [2024-07-24 14:12:29.720531] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
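nvmfappstart, traced above, backgrounds the target and blocks in waitforlisten until the RPC socket answers. A hedged equivalent (binary path and flags as in this run; the readiness probe below uses spdk_get_version as a stand-in for the harness's internal check):

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the default RPC socket until the app is up and serving RPCs
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done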
00:14:02.545 [2024-07-24 14:12:29.720617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.545 [2024-07-24 14:12:29.720689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:02.545 [2024-07-24 14:12:29.720796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.545 [2024-07-24 14:12:29.720800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:02.545 14:12:29 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:02.545 14:12:29 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:14:02.545 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:02.545 14:12:29 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:02.545 14:12:29 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:02.545 14:12:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.545 14:12:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:02.545 14:12:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode6435 00:14:02.835 [2024-07-24 14:12:30.151446] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:02.835 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:02.835 { 00:14:02.835 "nqn": "nqn.2016-06.io.spdk:cnode6435", 00:14:02.835 "tgt_name": "foobar", 00:14:02.835 "method": "nvmf_create_subsystem", 00:14:02.835 "req_id": 1 00:14:02.835 } 00:14:02.835 Got JSON-RPC error response 00:14:02.835 response: 00:14:02.835 { 00:14:02.835 "code": -32603, 00:14:02.835 "message": "Unable to find target foobar" 00:14:02.835 }' 00:14:02.835 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:02.835 { 00:14:02.835 "nqn": "nqn.2016-06.io.spdk:cnode6435", 00:14:02.835 "tgt_name": "foobar", 00:14:02.835 "method": "nvmf_create_subsystem", 00:14:02.835 "req_id": 1 00:14:02.835 } 00:14:02.835 Got JSON-RPC error response 00:14:02.835 response: 00:14:02.835 { 00:14:02.835 "code": -32603, 00:14:02.835 "message": "Unable to find target foobar" 00:14:02.835 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:03.097 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:03.097 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode6762 00:14:03.097 [2024-07-24 14:12:30.452444] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6762: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:03.355 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:03.355 { 00:14:03.355 "nqn": "nqn.2016-06.io.spdk:cnode6762", 00:14:03.355 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:03.355 "method": "nvmf_create_subsystem", 00:14:03.355 "req_id": 1 00:14:03.355 } 00:14:03.355 Got JSON-RPC error response 00:14:03.355 response: 00:14:03.355 { 00:14:03.355 "code": -32602, 00:14:03.355 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:03.355 }' 00:14:03.355 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@46 -- # [[ 
request: 00:14:03.355 { 00:14:03.355 "nqn": "nqn.2016-06.io.spdk:cnode6762", 00:14:03.355 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:03.355 "method": "nvmf_create_subsystem", 00:14:03.355 "req_id": 1 00:14:03.355 } 00:14:03.355 Got JSON-RPC error response 00:14:03.355 response: 00:14:03.355 { 00:14:03.355 "code": -32602, 00:14:03.355 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:03.355 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:03.355 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:03.355 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode5120 00:14:03.355 [2024-07-24 14:12:30.725326] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5120: invalid model number 'SPDK_Controller' 00:14:03.613 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:03.614 { 00:14:03.614 "nqn": "nqn.2016-06.io.spdk:cnode5120", 00:14:03.614 "model_number": "SPDK_Controller\u001f", 00:14:03.614 "method": "nvmf_create_subsystem", 00:14:03.614 "req_id": 1 00:14:03.614 } 00:14:03.614 Got JSON-RPC error response 00:14:03.614 response: 00:14:03.614 { 00:14:03.614 "code": -32602, 00:14:03.614 "message": "Invalid MN SPDK_Controller\u001f" 00:14:03.614 }' 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:03.614 { 00:14:03.614 "nqn": "nqn.2016-06.io.spdk:cnode5120", 00:14:03.614 "model_number": "SPDK_Controller\u001f", 00:14:03.614 "method": "nvmf_create_subsystem", 00:14:03.614 "req_id": 1 00:14:03.614 } 00:14:03.614 Got JSON-RPC error response 00:14:03.614 response: 00:14:03.614 { 00:14:03.614 "code": -32602, 00:14:03.614 "message": "Invalid MN SPDK_Controller\u001f" 00:14:03.614 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 
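The negative tests above all share one shape: issue a deliberately malformed nvmf_create_subsystem, capture the JSON-RPC error, and glob-match the expected message. A minimal sketch of that pattern (rpc path as before; `|| true` keeps the expected non-zero exit from tripping set -e):

    out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode6435 2>&1) || true
    [[ $out == *"Unable to find target"* ]]

    out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode6762 2>&1) || true
    [[ $out == *"Invalid SN"* ]]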
00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x64' 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:14:03.614 
14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@28 -- # [[ I == \- ]] 00:14:03.614 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@31 -- # echo 'I)1;\w=36d8EVFjEd!1>\' 00:14:03.615 14:12:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'I)1;\w=36d8EVFjEd!1>\' nqn.2016-06.io.spdk:cnode4124 00:14:03.873 [2024-07-24 14:12:31.026337] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4124: invalid serial number 'I)1;\w=36d8EVFjEd!1>\' 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:03.873 { 00:14:03.873 "nqn": "nqn.2016-06.io.spdk:cnode4124", 00:14:03.873 "serial_number": "I)1;\\w=36d8EVFjEd!1>\\", 00:14:03.873 "method": "nvmf_create_subsystem", 00:14:03.873 "req_id": 1 00:14:03.873 } 00:14:03.873 Got JSON-RPC error response 00:14:03.873 response: 00:14:03.873 { 00:14:03.873 "code": -32602, 00:14:03.873 "message": "Invalid SN I)1;\\w=36d8EVFjEd!1>\\" 00:14:03.873 }' 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:03.873 { 00:14:03.873 "nqn": "nqn.2016-06.io.spdk:cnode4124", 00:14:03.873 "serial_number": "I)1;\\w=36d8EVFjEd!1>\\", 00:14:03.873 "method": "nvmf_create_subsystem", 00:14:03.873 "req_id": 1 00:14:03.873 } 00:14:03.873 Got JSON-RPC error response 00:14:03.873 response: 00:14:03.873 { 00:14:03.873 "code": -32602, 00:14:03.873 "message": "Invalid SN I)1;\\w=36d8EVFjEd!1>\\" 00:14:03.873 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' 
'60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:03.873 14:12:31 
nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:14:03.873 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.874 
14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.874 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.875 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:14:03.875 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:14:03.875 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:14:03.875 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.875 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.875 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:14:03.875 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:14:03.875 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:14:03.875 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.875 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.875 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:14:03.875 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:14:03.875 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:14:03.875 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.875 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.875 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:14:03.875 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:14:03.875 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:14:03.875 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.875 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.875 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@28 -- # [[ 2 == \- ]] 00:14:03.875 14:12:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@31 -- # echo '2h6c /dev/null' 00:14:06.970 14:12:34 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.970 14:12:34 
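The two character-by-character walks above are gen_random_s from target/invalid.sh building first a 21-byte and then a 41-byte string, one byte past the NVMe limits of 20 bytes for a serial number and 40 for a model number, so nvmf_create_subsystem must reject both. The 41-byte result assembled above is 2h6c<e10lD{!f1D'Wz_S(-SD)2t+}<n(a!4=0#Ui= (the capture truncates its echo and the model-number call that presumably follows). A minimal sketch of the helper as it can be reconstructed from the trace: the code range 32..127 comes from the chars=() array echoed above, while the random selection and the printf -v form are assumptions.

    gen_random_s() {
        local length=$1 ll hex ch string=
        for (( ll = 0; ll < length; ll++ )); do
            hex=$(printf %x $(( 32 + RANDOM % 96 )))  # one of codes 32..127; the real pick logic is assumed
            printf -v ch "\x$hex"                     # same \xHH expansion the traced 'echo -e' performs
            string+=$ch
        done
        echo "$string"
    }

Each generated string is handed to nvmf_create_subsystem (-s for serial number, -d for model number) and the JSON-RPC reply is matched against the escaped glob *\I\n\v\a\l\i\d\ \S\N* (or \M\N), i.e. the test passes only when the -32602 error text contains 'Invalid SN' / 'Invalid MN'.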
nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:06.970 14:12:34 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:06.970 14:12:34 nvmf_rdma.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:14:06.970 14:12:34 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:09.499 14:12:36 
nvmf_rdma.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:14:09.499 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:14:09.499 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.499 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:14:09.499 Found net devices under 0000:81:00.0: mlx_0_0 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:14:09.500 Found net devices under 0000:81:00.1: mlx_0_1 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- 
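What just ran is gather_supported_nvmf_pci_devs: pci_bus_cache maps vendor:device pairs to bus addresses, the mlx array picked up both ConnectX functions (0x15b3 - 0x1015 at 0000:81:00.0 and 0000:81:00.1), and each address is then resolved to its kernel interface through sysfs. The lines below are lifted from the traced nvmf/common.sh statements, condensed into one sketch:

    pci_devs=("${mlx[@]}")                                # mlx5 driver selected, so only the Mellanox list survives
    for pci in "${pci_devs[@]}"; do                       # here: 0000:81:00.0 and 0000:81:00.1
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # one entry per netdev bound to the function
        pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the sysfs path, keeping e.g. mlx_0_0
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done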
nvmf/common.sh@414 -- # is_hw=yes 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@420 -- # rdma_device_init 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@58 -- # uname 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 
-- # ip -o -4 addr show mlx_0_0 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:09.500 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:09.500 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:14:09.500 altname enp129s0f0np0 00:14:09.500 inet 192.168.100.8/24 scope global mlx_0_0 00:14:09.500 valid_lft forever preferred_lft forever 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:09.500 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:09.500 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:14:09.500 altname enp129s0f1np1 00:14:09.500 inet 192.168.100.9/24 scope global mlx_0_1 00:14:09.500 valid_lft forever preferred_lft forever 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- 
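get_ip_address, traced twice above, is just a three-stage pipe over iproute2 output; only the function wrapper is assumed here, the pipeline itself is verbatim from the trace:

    get_ip_address() {
        local interface=$1
        # column 4 of 'ip -o -4 addr show' is the CIDR address, e.g. 192.168.100.8/24
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

For mlx_0_0 and mlx_0_1 this yields 192.168.100.8 and 192.168.100.9, the two addresses everything later in the run listens and connects on.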
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:09.500 192.168.100.9' 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:09.500 192.168.100.9' 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # head -n 1 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:09.500 192.168.100.9' 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # tail -n +2 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # head -n 1 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=53691 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
-m 0xE 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 53691 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 53691 ']' 00:14:09.500 14:12:36 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.501 14:12:36 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:09.501 14:12:36 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.501 14:12:36 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:09.501 14:12:36 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:09.759 [2024-07-24 14:12:36.893283] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:14:09.759 [2024-07-24 14:12:36.893373] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:09.759 EAL: No free 2048 kB hugepages reported on node 1 00:14:09.759 [2024-07-24 14:12:36.965836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:09.759 [2024-07-24 14:12:37.057047] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:09.759 [2024-07-24 14:12:37.057114] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:09.759 [2024-07-24 14:12:37.057140] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:09.759 [2024-07-24 14:12:37.057154] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:09.759 [2024-07-24 14:12:37.057166] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:09.759 [2024-07-24 14:12:37.057252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:09.759 [2024-07-24 14:12:37.057311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:09.759 [2024-07-24 14:12:37.057314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:10.017 14:12:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:10.017 14:12:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:14:10.017 14:12:37 nvmf_rdma.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:10.017 14:12:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:10.017 14:12:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:10.017 14:12:37 nvmf_rdma.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:10.017 14:12:37 nvmf_rdma.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:14:10.017 14:12:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.017 14:12:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:10.017 [2024-07-24 14:12:37.225198] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xc55220/0xc596d0) succeed. 
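nvmfappstart -m 0xE closes out the preamble: RDMA_IP_LIST from above is split into the two target addresses, the target is launched, and the script blocks until its RPC socket answers. The head/tail split is verbatim from the trace; the polling loop inside waitforlisten is an assumption (only its rpc_addr=/var/tmp/spdk.sock default appears above):

    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &   # reactors on cores 1-3 (mask 0xE), as logged
    nvmfpid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid"   # stop waiting if the target died during startup
        sleep 0.1
    done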
00:14:10.017 [2024-07-24 14:12:37.235590] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xc56770/0xc9ad60) succeed. 00:14:10.017 14:12:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.017 14:12:37 nvmf_rdma.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:14:10.017 14:12:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.017 14:12:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:10.017 Malloc0 00:14:10.017 14:12:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.017 14:12:37 nvmf_rdma.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:10.017 14:12:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.017 14:12:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:10.017 Delay0 00:14:10.017 14:12:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.017 14:12:37 nvmf_rdma.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:10.017 14:12:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.017 14:12:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:10.275 14:12:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.275 14:12:37 nvmf_rdma.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:14:10.275 14:12:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.275 14:12:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:10.275 14:12:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.275 14:12:37 nvmf_rdma.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:14:10.275 14:12:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.275 14:12:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:10.275 [2024-07-24 14:12:37.407441] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:10.275 14:12:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.275 14:12:37 nvmf_rdma.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:14:10.275 14:12:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.275 14:12:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:10.275 14:12:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.275 14:12:37 nvmf_rdma.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:14:10.275 EAL: No free 2048 kB hugepages reported on node 1 00:14:10.275 [2024-07-24 14:12:37.489794] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:12.802 Initializing NVMe Controllers 00:14:12.802 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: 
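Collected from the rpc_cmd calls above, the whole abort-target bring-up is this RPC sequence (rpc_cmd drives the same JSON-RPC methods that scripts/rpc.py exposes; written as rpc.py calls here):

    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
    rpc.py bdev_malloc_create 64 4096 -b Malloc0
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

Delay0 wraps Malloc0 with large artificial per-I/O latency, which is what keeps the abort example's 128 queued requests in flight long enough to be abortable; the 'controller IO queue size 128 less than required' warning that follows is the expected consequence of running with -q 128.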
nqn.2016-06.io.spdk:cnode0 00:14:12.802 controller IO queue size 128 less than required 00:14:12.802 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:14:12.802 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:14:12.802 Initialization complete. Launching workers. 00:14:12.802 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 39029 00:14:12.802 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 39090, failed to submit 62 00:14:12.802 success 39030, unsuccess 60, failed 0 00:14:12.802 14:12:39 nvmf_rdma.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:12.802 14:12:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.802 14:12:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:12.802 14:12:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.802 14:12:39 nvmf_rdma.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:14:12.802 14:12:39 nvmf_rdma.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:14:12.802 14:12:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:12.802 14:12:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:14:12.802 14:12:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:12.802 14:12:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:12.802 14:12:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:14:12.802 14:12:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:12.802 14:12:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:12.802 rmmod nvme_rdma 00:14:12.802 rmmod nvme_fabrics 00:14:12.802 14:12:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:12.802 14:12:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:14:12.802 14:12:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:14:12.802 14:12:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 53691 ']' 00:14:12.802 14:12:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 53691 00:14:12.802 14:12:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 53691 ']' 00:14:12.802 14:12:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 53691 00:14:12.802 14:12:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:14:12.802 14:12:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:12.802 14:12:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 53691 00:14:12.802 14:12:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:12.802 14:12:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:12.802 14:12:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 53691' 00:14:12.802 killing process with pid 53691 00:14:12.802 14:12:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@965 -- # kill 53691 00:14:12.802 14:12:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@970 -- # wait 53691 00:14:12.802 14:12:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:12.802 14:12:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@495 -- # [[ rdma == 
\t\c\p ]] 00:14:12.802 00:14:12.802 real 0m5.794s 00:14:12.802 user 0m11.635s 00:14:12.802 sys 0m2.238s 00:14:12.802 14:12:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:12.802 14:12:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:12.802 ************************************ 00:14:12.802 END TEST nvmf_abort 00:14:12.802 ************************************ 00:14:12.802 14:12:40 nvmf_rdma -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:14:12.802 14:12:40 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:12.802 14:12:40 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:12.802 14:12:40 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:12.802 ************************************ 00:14:12.802 START TEST nvmf_ns_hotplug_stress 00:14:12.802 ************************************ 00:14:12.802 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:14:12.802 * Looking for test storage... 00:14:12.802 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:12.802 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:12.802 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:14:12.802 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:12.802 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:12.802 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:12.802 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:12.802 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:12.802 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:12.802 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:12.802 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:12.802 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:12.802 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:12.803 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:14:12.803 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:14:12.803 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:12.803 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:12.803 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:12.803 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:12.803 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 
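Before following the ns_hotplug_stress setup below, it is worth condensing the nvmf_abort flow that just finished above. Reassembled from the rpc_cmd trace, it amounts to roughly the following sketch (it assumes nvmf_tgt is already running with an RDMA transport created, and reuses the workspace path and addresses from this job):

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # path taken from this log
  rpc=$SPDK/scripts/rpc.py
  # A 64 MB malloc bdev wrapped in a delay bdev with one-second delays
  # (arguments in microseconds), so queued reads stay in flight long
  # enough for aborts to land.
  $rpc bdev_malloc_create 64 4096 -b Malloc0
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  # Expose Delay0 over NVMe/RDMA on 192.168.100.8:4420.
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
  # Queue reads at qd 128 for 1 second and abort them; the counters printed
  # above (abort submitted 39090, failed to submit 62, ...) come from here.
  $SPDK/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128

The delay bdev is the load-bearing piece: against a plain malloc bdev the reads would complete before the abort commands could catch them.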
00:14:12.803 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:12.803 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:12.803 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:12.803 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.803 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.803 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.803 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:14:12.803 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.803 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:14:12.803 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:12.803 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:12.803 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:12.803 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:12.803 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:12.803 14:12:40 
nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:12.803 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:12.803 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:12.803 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:12.803 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:14:12.803 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:12.803 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:12.803 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:12.803 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:12.803 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:12.803 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.803 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:12.803 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.803 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:12.803 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:12.803 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:12.803 14:12:40 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.330 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:15.330 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:15.330 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:15.330 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:15.330 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:15.330 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:15.330 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:15.330 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:15.330 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:15.330 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:14:15.330 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:15.330 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:14:15.330 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:15.330 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:15.330 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:14:15.330 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:15.330 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:14:15.330 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:15.330 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:15.330 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:15.330 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:15.330 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:15.330 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:15.330 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:15.330 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:15.330 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:15.330 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:15.330 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:14:15.331 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:14:15.331 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:15.331 14:12:42 
nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:14:15.331 Found net devices under 0000:81:00.0: mlx_0_0 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:14:15.331 Found net devices under 0000:81:00.1: mlx_0_1 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # uname 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:15.331 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:15.332 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:15.332 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:14:15.332 altname enp129s0f0np0 00:14:15.332 inet 192.168.100.8/24 scope global mlx_0_0 00:14:15.332 valid_lft forever preferred_lft forever 
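The trace above (repeated for mlx_0_1 just below) is the harness locating its RDMA-capable ports, loading the kernel RDMA stack, and harvesting per-port IPv4 addresses. Outside the harness the same three steps can be approximated directly — a sketch only; the PCI IDs, interface names and addresses are the ones this run found, and lspci stands in for the script's own PCI-bus cache:

  # 1. Find the ConnectX ports (Mellanox vendor 0x15b3, device 0x1015):
  lspci -D -d 15b3:1015            # -> 0000:81:00.0 and 0000:81:00.1 on this node
  # 2. Load the kernel RDMA modules, in the same order as nvmf/common.sh:
  modprobe -a ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm
  # 3. Map each port to its netdev via sysfs, then pull its IPv4 address
  #    exactly the way the trace does (ip -o -4 ... | awk | cut):
  ls /sys/bus/pci/devices/0000:81:00.0/net                       # -> mlx_0_0
  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.8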
00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:15.332 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:15.332 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:14:15.332 altname enp129s0f1np1 00:14:15.332 inet 192.168.100.9/24 scope global mlx_0_1 00:14:15.332 valid_lft forever preferred_lft forever 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:15.332 14:12:42 
nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:15.332 192.168.100.9' 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:15.332 192.168.100.9' 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # head -n 1 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:15.332 192.168.100.9' 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # tail -n +2 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # head -n 1 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=56043 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0xE 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 56043 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 56043 ']' 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:15.332 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.590 [2024-07-24 14:12:42.725033] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:14:15.590 [2024-07-24 14:12:42.725121] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:15.590 EAL: No free 2048 kB hugepages reported on node 1 00:14:15.590 [2024-07-24 14:12:42.792196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:15.590 [2024-07-24 14:12:42.879027] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:15.590 [2024-07-24 14:12:42.879086] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:15.590 [2024-07-24 14:12:42.879100] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:15.590 [2024-07-24 14:12:42.879112] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:15.590 [2024-07-24 14:12:42.879122] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
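The nvmfappstart sequence above boils down to launching the target with the flags shown and then blocking until the RPC socket answers; a condensed stand-in for the harness's waitforlisten, under the assumption that polling rpc_get_methods is an adequate readiness check, looks like:

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # workspace path from this job
  $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &     # shm id 0, all tracepoints, cores 1-3
  nvmfpid=$!                                           # the harness recorded 56043 here
  # Poll until the app listens on its default RPC socket, /var/tmp/spdk.sock.
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || exit 1    # give up if the target died during startup
      sleep 0.5
  done

The reactor-start notices that follow below confirm the -m 0xE core mask: reactors come up on cores 1, 2 and 3.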
00:14:15.590 [2024-07-24 14:12:42.879210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:15.590 [2024-07-24 14:12:42.879274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:15.590 [2024-07-24 14:12:42.879276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.847 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:15.847 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:14:15.847 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:15.847 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:15.847 14:12:42 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.847 14:12:43 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:15.847 14:12:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:14:15.847 14:12:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:16.104 [2024-07-24 14:12:43.302887] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd6c200/0xd706b0) succeed. 00:14:16.104 [2024-07-24 14:12:43.313372] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd6d750/0xdb1d40) succeed. 00:14:16.104 14:12:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:16.667 14:12:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:16.667 [2024-07-24 14:12:43.963251] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:16.667 14:12:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:14:16.924 14:12:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:14:17.182 Malloc0 00:14:17.182 14:12:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:17.439 Delay0 00:14:17.439 14:12:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:17.696 14:12:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:14:17.954 NULL1 00:14:17.954 14:12:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
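At this point the target side is fully staged: the rdma transport, subsystem cnode1 (capped at 10 namespaces via -m 10), the Malloc0-backed Delay0 chain, and the NULL1 null bdev created at 1000 blocks of 512 bytes. The stress phase that follows runs spdk_nvme_perf in the background and hot-swaps namespaces while growing NULL1 one step per pass. The loop body visible in the trace comes down to roughly this sketch ($SPDK and $rpc as in the earlier sketches):

  $SPDK/build/bin/spdk_nvme_perf -c 0x1 -t 30 -q 128 -w randread -o 512 -Q 1000 \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' &
  PERF_PID=$!                                  # 56458 in this run
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do    # keep going while perf is alive
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # yank nsid 1 under load
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # and re-attach it
      null_size=$((null_size + 1))             # grow the null bdev: 1001, 1002, ...
      $rpc bdev_null_resize NULL1 $null_size
  done

The walls of "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" below are the expected fallout: a generic status code of 0x0b matches the NVMe "Invalid Namespace or Format" error, which is what in-flight reads hit while nsid 1 is detached. The test passes as long as the target survives the churn, not if the I/O succeeds.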
00:14:18.245 14:12:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=56458 00:14:18.245 14:12:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:14:18.245 14:12:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 56458 00:14:18.245 14:12:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:18.245 EAL: No free 2048 kB hugepages reported on node 1 00:14:19.615 Read completed with error (sct=0, sc=11) 00:14:19.615 14:12:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:19.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:19.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:19.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:19.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:19.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:19.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:19.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:19.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:19.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:19.615 14:12:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:14:19.615 14:12:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:14:19.872 true 00:14:19.872 14:12:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 56458 00:14:19.872 14:12:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:20.802 14:12:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:20.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:20.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:20.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:20.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:20.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:20.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:20.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:20.802 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:20.802 14:12:48 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:14:20.802 14:12:48 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:14:21.059 true 00:14:21.059 14:12:48 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 56458 00:14:21.059 14:12:48 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.990 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:21.990 14:12:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:21.990 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:21.990 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:21.990 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:21.990 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:21.990 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:21.990 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:21.990 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:22.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:22.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:22.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:22.248 14:12:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:14:22.248 14:12:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:14:22.505 true 00:14:22.505 14:12:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 56458 00:14:22.505 14:12:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:23.437 14:12:50 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:23.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:23.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:23.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:23.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:23.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:23.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:23.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:23.437 14:12:50 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:14:23.437 14:12:50 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:14:23.695 true 00:14:23.695 14:12:51 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 56458 00:14:23.695 14:12:51 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.627 14:12:51 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:24.627 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:24.627 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:24.627 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:24.627 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:24.627 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:24.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:24.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:24.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:24.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:24.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:24.884 14:12:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:14:24.884 14:12:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:14:25.142 true 00:14:25.142 14:12:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 56458 00:14:25.142 14:12:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:26.075 14:12:53 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:26.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:26.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:26.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:26.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:26.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:26.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:26.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:26.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:26.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:26.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:26.334 14:12:53 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:14:26.334 14:12:53 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:14:26.591 true 00:14:26.591 14:12:53 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 56458 00:14:26.591 14:12:53 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:27.155 14:12:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:27.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:27.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:27.412 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:27.412 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:27.412 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:27.412 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:27.412 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:27.412 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:27.412 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:27.412 14:12:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:14:27.412 14:12:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:14:27.670 true 00:14:27.670 14:12:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 56458 00:14:27.670 14:12:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:28.603 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:28.603 14:12:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:28.603 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:28.603 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:28.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:28.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:28.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:28.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:28.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:28.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:28.860 14:12:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:14:28.860 14:12:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:14:29.118 true 00:14:29.118 14:12:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 56458 00:14:29.118 14:12:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.050 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:30.050 14:12:57 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:30.050 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:30.050 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:30.050 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:30.050 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:30.050 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:30.050 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:30.050 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:30.050 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:30.050 14:12:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:14:30.050 14:12:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:14:30.307 true 00:14:30.307 14:12:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 56458 00:14:30.307 14:12:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:31.239 14:12:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:31.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:31.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:31.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:31.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:31.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:31.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:31.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:31.496 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:31.496 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:31.496 14:12:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:14:31.496 14:12:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:14:31.754 true 00:14:31.754 14:12:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 56458 00:14:31.754 14:12:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.319 14:12:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:32.319 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:32.577 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:32.577 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:32.577 Message suppressed 999 times: 
Read completed with error (sct=0, sc=11)
00:14:32.577 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:32.577 14:12:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:14:32.577 14:12:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:14:32.871 true
00:14:32.871 14:13:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 56458
00:14:32.871 14:13:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:33.803 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:33.803 14:13:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:34.061 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:34.061 14:13:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:14:34.061 14:13:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:14:34.318 true
00:14:34.318 14:13:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 56458
00:14:34.318 14:13:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:35.251 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:35.251 14:13:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:35.251 14:13:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:14:35.251 14:13:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:14:35.509 true
00:14:35.509 14:13:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 56458
00:14:35.509 14:13:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:36.441 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:36.441 14:13:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:36.699 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:36.699 14:13:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:14:36.699 14:13:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:14:36.956 true
00:14:36.956 14:13:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 56458
00:14:36.956 14:13:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:37.890 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:37.890 14:13:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:38.148 14:13:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:14:38.148 14:13:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:14:38.405 true
00:14:38.405 14:13:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 56458
00:14:38.405 14:13:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:38.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:38.970 14:13:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:39.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:39.228 14:13:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:14:39.228 14:13:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:14:39.792 true
00:14:39.792 14:13:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 56458
00:14:39.792 14:13:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:40.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:40.357 14:13:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:40.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:40.614 14:13:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:14:40.614 14:13:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:14:40.871 true
00:14:40.871 14:13:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 56458
00:14:40.871 14:13:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:41.803 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:41.803 14:13:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:41.803 14:13:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:14:41.803 14:13:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:14:42.060 true
00:14:42.060 14:13:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 56458
00:14:42.060 14:13:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:42.625 14:13:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:42.625 14:13:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:14:42.625 14:13:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:14:42.883 true
00:14:42.883 14:13:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 56458
00:14:42.883 14:13:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:43.815 14:13:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:43.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
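The records above come from ns_hotplug_stress.sh driving the SPDK RPC client in a loop: while the target process (PID 56458) is alive, it removes namespace 1 from nqn.2016-06.io.spdk:cnode1, re-adds it backed by the Delay0 bdev, and grows the NULL1 null bdev by one unit per pass. A minimal sketch of that loop, reconstructed from the xtrace lines; only the commands and their arguments appear in the trace, so the loop shape and the starting size are assumptions:

    #!/usr/bin/env bash
    # Sketch reconstructed from the traced ns_hotplug_stress.sh lines 44-50
    # (not the verbatim test script). Assumes a running nvmf target with
    # subsystem nqn.2016-06.io.spdk:cnode1 and bdevs Delay0/NULL1 set up.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    tgt_pid=56458      # target PID, from the "kill -0 56458" trace lines
    null_size=1010     # assumed start; the trace shows sizes 1011..1020
    while kill -0 "$tgt_pid" 2>/dev/null; do                            # line 44
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # line 45
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # line 46
        null_size=$((null_size + 1))                                    # line 49
        "$rpc" bdev_null_resize NULL1 "$null_size"                      # line 50
    done

The "true" after each bdev_null_resize is the RPC's captured return value; the interleaved "Message suppressed" lines are the initiator's reads racing the namespace removal and re-add.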
00:14:44.099 [2024-07-24 14:13:11.192324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:14:44.101 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
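Each record in this burst is the same length-check failure at ctrlr_bdev.c:309: a read of NLB 1 block at 512 bytes per block needs 512 bytes of buffer, but the request's SGL describes only 1 byte, so nvmf_bdev_ctrlr_read_cmd fails the command before it reaches the bdev. The (sct=0, sc=15) completions are consistent with the generic status Data SGL Length Invalid (0x0f), while the earlier (sct=0, sc=11) completions plausibly decode to Invalid Namespace or Format (0x0b) from reads racing the namespace removal. The comparison behind the message, restated as a standalone shell check with the values taken from the log text (variable names are illustrative):

    # 1 block * 512 bytes/block = 512 bytes required, but only 1 byte of SGL:
    nlb=1; block_size=512; sgl_length=1
    if [ $((nlb * block_size)) -gt "$sgl_length" ]; then
        echo "Read NLB $nlb * block size $block_size > SGL length $sgl_length"
    fi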
00:14:44.103 14:13:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:14:44.103 14:13:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:14:44.104 [2024-07-24 14:13:11.216687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:14:44.104
[2024-07-24 14:13:11.216734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.104 [2024-07-24 14:13:11.216801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.104 [2024-07-24 14:13:11.216872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.104 [2024-07-24 14:13:11.216920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.104 [2024-07-24 14:13:11.216968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.217015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.217067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.217131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.217194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.217235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.217281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.217480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.217543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.217592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.217642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.217687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.217728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.217797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.217874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.217925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.217977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.218026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.218093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.218152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.218198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.218241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.218283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.218332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.218375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.218423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.218471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.218515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.218560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.218604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.218655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.218703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.218754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.218824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.218877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.218922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.218969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.219013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.219063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.219264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.219314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.219377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.219425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.219471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.219515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.219561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.219607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.219649] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.219693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.219737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.219814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.219863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.219912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.219959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.220013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.220062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.220124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.220187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.220232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.220276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.220321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.220367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.220411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.220460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.220509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.220555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.220599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.220644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.220688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.220737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.220821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.220897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.221118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 
[2024-07-24 14:13:11.221184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.221233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.221280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.221325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.221369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.221416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.221461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.221505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.221553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.221599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.221646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.221690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.221736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.221805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.221859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.221913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.221961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.222008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.222054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.222121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.222168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.222216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.222262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.222311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.105 [2024-07-24 14:13:11.222362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.222410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.222453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.222498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.222542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.222591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.222787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.222876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.222927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.222975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.223021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.223067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.223132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.223177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.223226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.223274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.223319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.223369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.223415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.223461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.223512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.223559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.223603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.223646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.223691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.223736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.223807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.223875] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.223928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.223977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.224027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.224076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.224145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.224212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.224257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.224305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.224352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.224405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.224609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.224657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.224704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.224749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.224815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.224863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.224917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.224965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.225014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.225064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.225123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.225171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.225216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.225262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.225305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 
[2024-07-24 14:13:11.225350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.225395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.225446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.225490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.225542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.225587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.225632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.225679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.225722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.225769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.225841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.225890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.225946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.225991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.226035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.226083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.226148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.226208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.226397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.226444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.226491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.226535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.226585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.226630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.226672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.226716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.226760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.226834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.226883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.226934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.226984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.227034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.227078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.227141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.227186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.227239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.227286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.227338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.227383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.227430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.227478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.227522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.227568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.227613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.106 [2024-07-24 14:13:11.227663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.227711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.227757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.227823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.227888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.228107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.228191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.228240] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.228283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.228327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.228369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.228415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.228465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.228513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.228561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.228608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.228657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.228704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.228754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.228826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.228877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.228934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.228984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.229032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.229095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.229146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.229190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.229245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.229291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.229341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.229389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.229436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.229481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 
[2024-07-24 14:13:11.229530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.229579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.229633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.229682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.229898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.229955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.230019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.230069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.230130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.230179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.230224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.230270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.230320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.230368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.230416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.230462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.230508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.230553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.230602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.230646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.230690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.230740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.230810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.230862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.230910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.230956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.231004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.231051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.231116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.231162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.231207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.231253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.231302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.231353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.231399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.231604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.231655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.231719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.231765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.231851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.231904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.231954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.232003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.232051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.232117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.232179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.232225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.232276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.232323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.232371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.232415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.232461] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.232507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.232554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.232608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.232654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.232698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.232747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.232815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.232865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.232913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.232964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.233010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.233059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.233128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.107 [2024-07-24 14:13:11.233178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.108 [2024-07-24 14:13:11.233232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.108 [2024-07-24 14:13:11.233279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.108 [2024-07-24 14:13:11.233476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.108 [2024-07-24 14:13:11.233538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.108 [2024-07-24 14:13:11.233587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.108 [2024-07-24 14:13:11.233631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.108 [2024-07-24 14:13:11.233677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.108 [2024-07-24 14:13:11.233723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.108 [2024-07-24 14:13:11.233770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.108 [2024-07-24 14:13:11.233845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.108 [2024-07-24 14:13:11.233896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.108 
[2024-07-24 14:13:11.233945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.108 [2024-07-24 14:13:11.233992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.108 [2024-07-24 14:13:11.234040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.108 [2024-07-24 14:13:11.234086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.108 [2024-07-24 14:13:11.234150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.108 [2024-07-24 14:13:11.234195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.108 [2024-07-24 14:13:11.234237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.108 [2024-07-24 14:13:11.234282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.108 [2024-07-24 14:13:11.234331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.108 [2024-07-24 14:13:11.234376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.108 [2024-07-24 14:13:11.234423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.108 [2024-07-24 14:13:11.234468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.108 [2024-07-24 14:13:11.234516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.108 [2024-07-24 14:13:11.234562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.108 [2024-07-24 14:13:11.234610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.108 [2024-07-24 14:13:11.234656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.108 [2024-07-24 14:13:11.234703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.108 [2024-07-24 14:13:11.234749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.108 [2024-07-24 14:13:11.234825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.108 [2024-07-24 14:13:11.234880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.108 [2024-07-24 14:13:11.234929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.108 [2024-07-24 14:13:11.234979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.108 [2024-07-24 14:13:11.235030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.108 [2024-07-24 14:13:11.235240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.108 [2024-07-24 14:13:11.235292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.108 [2024-07-24 14:13:11.235351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.108 [2024-07-24 14:13:11.235401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.112 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:44.114 [2024-07-24 14:13:11.265244] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.265287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.265332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.265535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.265597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.265650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.265693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.265735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.265778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.265849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.265902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.265952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.265998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.266047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.266110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.266159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.266210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.266257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.266307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.266351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.266402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.266450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.266501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.266548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.266602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.266649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 
[2024-07-24 14:13:11.266695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.266740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.266806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.266856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.266902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.266950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.266999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.267046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.267109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.267298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.267345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.267407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.267460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.267507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.267558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.267607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.267652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.267702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.267747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.267820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.267873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.267931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.267982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.268031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.268079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.268143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.268207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.268253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.268304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.268349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.268401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.268446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.268494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.268543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.268593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.268639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.268689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.268735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.268812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.268863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.269073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.269137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.269204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.269251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.269297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.269342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.269390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.269436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.269486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.269532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.114 [2024-07-24 14:13:11.269579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.269626] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.269674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.269728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.269779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.269834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.269883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.269929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.269978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.270025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.270069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.270115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.270161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.270204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.270250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.270294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.270338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.270381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.270424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.270471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.270524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.270573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.270622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.270837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.270907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.270961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.271011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 
[2024-07-24 14:13:11.271059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.271123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.271168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.271213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.271262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.271309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.271356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.271405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.271452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.271500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.271544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.271588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.271633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.271679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.271727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.271797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.271869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.271919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.271968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.272015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.272064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.272135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.272198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.272245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.272295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.272344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.272390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.272435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.272626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.272679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.272738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.272811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.272864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.272913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.272961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.273014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.273064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.273131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.273178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.273227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.273271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.273319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.273374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.273421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.273470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.273515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.273558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.273605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.273650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.273697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.273748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.273821] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.273871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.273918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.273973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.274021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.274067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.274128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.274174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.274229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.274289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.274456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.274503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.274565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.274611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.274659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.274707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.274753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.274830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.274882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.274930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.115 [2024-07-24 14:13:11.274981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.275028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.275073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.275139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.275189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.275238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 
[2024-07-24 14:13:11.275286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.275335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.275380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.275427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.275472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.275518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.275563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.275610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.275657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.275706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.275752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.275829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.275895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.275957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.276006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.276256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.276321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.276368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.276416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.276462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.276508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.276554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.276598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.276644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.276690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.276733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.276813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.276863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.276912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.276956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.277003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.277063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.277137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.277186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.277233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.277278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.277323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.277369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.277413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.277456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.277498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.277543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.277589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.277633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.277680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.277725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.277787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.277994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.278044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.278122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.278179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.278226] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.278272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.278319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.278363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.278410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.278454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.278512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.278563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.278606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.278651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.278698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.278744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.278818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.278867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.278914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.278962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.279012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.279061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.279126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.279170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.279215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.279258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.279304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.279350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.279395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.279444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 
[2024-07-24 14:13:11.279491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.279547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.279610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.279823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.279891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.279943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.279999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.280047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.280117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.280180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.280227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.280273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.280323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.280372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.116 [2024-07-24 14:13:11.280418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.280462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.280525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.280570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.280624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.280681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.280730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.280785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.280874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.280921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.280967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.281012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.281059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.281131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.281195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.281242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.281291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.281337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.281385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.281460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.281513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.281565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.281756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.281843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.281899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.281947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.281997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.282047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.282118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.282178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.282225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.282273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.282324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.282371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.282421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.282467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.282519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.282567] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.282619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.282666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.282717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.282764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.282848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.282898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.282949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.283001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.283049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.283096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.283162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.283207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.283267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.283311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.283525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.283577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.283625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.283673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.283721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.283770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.283842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.283893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.283944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.283991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 [2024-07-24 14:13:11.284041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.117 
[2024-07-24 14:13:11.284102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:14:44.122 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 
00:14:44.123 [2024-07-24 14:13:11.315060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.315129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.315189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.315236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.315288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.315336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.315386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.315434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.315481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.315537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.315589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.315633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.315685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.315731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.315812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.315862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.315913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.315961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.316013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.316063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.316134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.316182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.316229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.316277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.316323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.316374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.316434] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.316509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.316710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.316758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.316841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.316893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.316942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.316991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.317039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.317112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.317162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.317212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.317264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.317312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.317363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.317408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.317456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.317501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.123 [2024-07-24 14:13:11.317552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.317598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.317648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.317697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.317745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.317823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.317881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.317929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 
[2024-07-24 14:13:11.317980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.318026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.318099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.318148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.318195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.318243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.318289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.318356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.318529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.318584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.318644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.318696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.318742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.318815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.318873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.318923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.318977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.319026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.319096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.319153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.319202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.319252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.319305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.319353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.319399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.319446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.319494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.319541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.319587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.319634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.319680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.319729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.319806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.319859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.319912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.319965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.320013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.320093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.320174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.320221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.320280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.320484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.320534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.320583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.320631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.320676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.320723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.320767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.320853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.320905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.320958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.321009] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.321061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.321124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.321174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.321220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.321269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.321314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.321362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.321407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.321459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.321506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.321556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.321605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.321651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.321704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.321753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.321828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.321879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.321928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.321974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.322038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.322255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.322305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.322352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.322401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.322452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 
[2024-07-24 14:13:11.322501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.322552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.322598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.322643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.322691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.322737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.322809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.322861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.322911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.322967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.323018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.323066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.323130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.124 [2024-07-24 14:13:11.323178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.323226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.323275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.323325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.323370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.323417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.323465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.323512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.323559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.323605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.323655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.323701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.323750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.324003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.324054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.324127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.324179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.324226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.324271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.324320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.324367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.324414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.324464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.324509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.324560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.324606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.324653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.324700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.324745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.324818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.324869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.324923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.324966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.325015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.325082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.325133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.325179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.325230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.325278] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.325328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.325375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.325425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.325470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.325515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.325583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.325635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.325687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.325878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.325950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.326003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.326051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.326115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.326162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.326209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.326256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.326306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.326353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.326401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.326448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.326499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.326549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.326594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.326638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.326689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 
[2024-07-24 14:13:11.326736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.326809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.326859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.326908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.326959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.327007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.327059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.327123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.327170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.327218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.327267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.327330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.327381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.327450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.327656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.327708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.327757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.327832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.327881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.327931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.327980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.328033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.328084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.328152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.328198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.328248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.328294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.328343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.328390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.328439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.328485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.328535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.328582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.328633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.125 [2024-07-24 14:13:11.328681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.328727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.328801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.328854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.328906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.328955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.329003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.329052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.329117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.329166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.329215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.329404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.329456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.329517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.329568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.329624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.329675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.329720] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.329769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.329842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.329899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.329946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.329995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.330049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.330116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.330166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.330212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.330258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.330304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.330350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.330398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.330449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.330498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.330547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.330594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.330643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.330689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.330739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.330809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.330861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.330910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.330958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.331183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 
[2024-07-24 14:13:11.331236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.331285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.331336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.331398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.331445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.331495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.331542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.331592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.331638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.331689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.331735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.331806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.331862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.331913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.331962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.332010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.332059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.332134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.332183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.332230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.332286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.332331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.332379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.332436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.332483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.332534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.332582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.332626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.332672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.332718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.332813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.332882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.332934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.332986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.333196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.333248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.333295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.333345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.333395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.333448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.333495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.333540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.333591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.333639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.333696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.333744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.333817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.333865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.333911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.333967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.334019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126 [2024-07-24 14:13:11.334085] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.126
[... same *ERROR* line repeated back-to-back from 14:13:11.334140 through 14:13:11.364978; duplicate log lines omitted ...]
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.132 [2024-07-24 14:13:11.365032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.132 [2024-07-24 14:13:11.365097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.132 [2024-07-24 14:13:11.365146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.132 [2024-07-24 14:13:11.365199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.132 [2024-07-24 14:13:11.365256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.132 [2024-07-24 14:13:11.365304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.132 [2024-07-24 14:13:11.365354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.132 [2024-07-24 14:13:11.365402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.132 [2024-07-24 14:13:11.365452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.132 [2024-07-24 14:13:11.365500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.132 [2024-07-24 14:13:11.365552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.132 [2024-07-24 14:13:11.365598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.132 [2024-07-24 14:13:11.365644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.132 [2024-07-24 14:13:11.365691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.132 [2024-07-24 14:13:11.365745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.132 [2024-07-24 14:13:11.365813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.132 [2024-07-24 14:13:11.365871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.132 [2024-07-24 14:13:11.366076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.132 [2024-07-24 14:13:11.366143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.132 [2024-07-24 14:13:11.366204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.132 [2024-07-24 14:13:11.366253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.366302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.366351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.366399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.366444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 
[2024-07-24 14:13:11.366493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.366539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.366588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.366638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.366685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.366737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.366810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.366860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.366909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.366962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.367010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.367060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.367129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.367177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.367225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.367275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.367319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.367365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.367409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.367460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.367507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.367554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.367601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.367684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.367732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.367806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.368008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.368057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.368123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.368170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.368227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.368272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.368318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.368365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.368412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.368464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.368510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.368555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.368605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.368651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.368700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.368745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.368818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.368875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.368926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.368979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.369035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.369101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.369149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.369196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.369243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.369293] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.369344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.369392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.369441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.369525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.369576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:44.133 [2024-07-24 14:13:11.369625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.369826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.369893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.369946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.369997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.370046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.370112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.370159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.370205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.370255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.370306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.370355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.370402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.370449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.370494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.370542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.370589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.370637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.370687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.133 [2024-07-24 14:13:11.370736] ctrlr_bdev.c: 
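(Annotation, not part of the captured output: the flood above is the NVMf target's bdev layer rejecting reads whose data buffer is too small. NLB 1 at block size 512 means 512 bytes must be transferred, but the SGL describes only 1 byte, so each read fails the length check before any I/O is issued and completes with NVMe status sct=0, sc=15, i.e. Status Code Type 0 "Generic Command Status", Status Code 0x0F "Data SGL Length Invalid", which is exactly the suppressed completion message. The following is a minimal standalone C sketch of that kind of check, with hypothetical names, not the actual ctrlr_bdev.c source:)

#include <inttypes.h>
#include <stdio.h>

#define SCT_GENERIC           0x0  /* NVMe Status Code Type: Generic Command Status */
#define SC_SGL_LENGTH_INVALID 0xF  /* NVMe Status Code: Data SGL Length Invalid (15) */

/* Reject a read whose transfer size exceeds what the SGL describes.
 * Hypothetical helper for illustration only. */
static int check_read_len(uint64_t nlb, uint32_t block_size, uint32_t sgl_len,
                          uint8_t *sct, uint8_t *sc)
{
    if (nlb * block_size > sgl_len) {
        fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
                " > SGL length %" PRIu32 "\n", nlb, block_size, sgl_len);
        *sct = SCT_GENERIC;
        *sc  = SC_SGL_LENGTH_INVALID;
        return -1;
    }
    return 0;
}

int main(void)
{
    uint8_t sct, sc;

    /* Values from the log: 1 block * 512 bytes needed, SGL covers 1 byte. */
    if (check_read_len(1, 512, 1, &sct, &sc) != 0) {
        printf("Read completed with error (sct=%u, sc=%u)\n", sct, sc);
    }
    return 0;
}

(The volume of repetition, plus the target's own 999-fold suppression notice, suggests the unit test is deliberately submitting undersized SGLs in a tight loop to exercise this rejection path.)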
[the identical read errors continue, timestamps 2024-07-24 14:13:11.369625 through 14:13:11.389671; duplicates collapsed]
00:14:44.137 [2024-07-24 14:13:11.389903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:14:44.137
[2024-07-24 14:13:11.389959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.390009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.390057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.390117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.390168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.390216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.390268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.390319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.390371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.390419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.390468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.390514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.390562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.390608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.390655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.390703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.390747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.390819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.390868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.390915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.390969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.391018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.391068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.391138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.391184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.391229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.391274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.391328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.391375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.391424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.391623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.391674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.391739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.391813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.391864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.391914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.391964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.392013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.392062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.392134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.392183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.392231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.392279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.392331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.392380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.392426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.392474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.392519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.392568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.392620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.392668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.392715] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.392762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.392837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.392893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.392943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.137 [2024-07-24 14:13:11.392997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.393047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.393113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.393161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.393216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.393420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.393470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.393524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.393585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.393635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.393683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.393736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.393809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.393864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.393912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.393960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.394008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.394059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.394124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.394175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.394223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 
[2024-07-24 14:13:11.394273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.394321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.394366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.394411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.394464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.394512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.394564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.394612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.394663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.394712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.394761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.394834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.394884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.394934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.394988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.395039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.395142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.395193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.395241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.395448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.395505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.395553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.395601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.395650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.395700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.395770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.395841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.395893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.395945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.395998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.396044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.396093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.396161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.396211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.396276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.396325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.396374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.396428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.396475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.396528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.396593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.396642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.396690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.396740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.396809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.396869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.396918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.396972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.397022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.397237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.397287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.397351] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.397405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.397452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.397501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.397553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.397600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.397648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.397694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.397742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.397818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.397872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.397924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.397973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.398024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.398072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.398133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.398184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.398231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.398282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.398329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.398377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.398424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.398468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.398513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.398562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.138 [2024-07-24 14:13:11.398609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 
[2024-07-24 14:13:11.398662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.398709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.398759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.398847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.398917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.399113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.399163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.399223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.399277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.399326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.399371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.399418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.399471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.399516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.399565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.399612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.399660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.399706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.399760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.399836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.399885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.399935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.399984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.400032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.400085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.400150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.400200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.400249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.400298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.400346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.400393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.400443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.400491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.400540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.400609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.400655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.400720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.400939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.400991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.401060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.401129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.401179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.401226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.401278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.401326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.401378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.401428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.401480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.401527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.401574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.401632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.401679] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.401725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.401774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.401847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.401898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.401946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.402001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.402048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.402095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.402158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.402206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.402254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.402306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.402351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.402398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.402460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.402508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.402572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.402748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.402844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.402901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.402952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.403006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.403058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.403121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.403168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 
[2024-07-24 14:13:11.403216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.403262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.403308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.403354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.403400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.403450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.403500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.403554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.403602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.403648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.403697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.403747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.403818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.403872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.403920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.403973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.404022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.139 [2024-07-24 14:13:11.404073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.404134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.404184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.404232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.404280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.404338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.404405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.404590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.404655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.404708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.404755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.404834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.404885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.404934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.404983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.405032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.405085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.405150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.405197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.405245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.405295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.405344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.405391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.405444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.405491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.405540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.405587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.405633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.405676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.405722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.405785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.405844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.405891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.405939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.405986] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.406035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.406103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.406187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.406238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.406287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.406481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.406535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.406581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.406633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.406678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.406727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.406797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.406847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.406894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.406937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.406983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.407035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.407102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.407153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.407199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.407248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.407293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.407340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.407389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.407433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 
[2024-07-24 14:13:11.407478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.407526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.407570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.407618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.407664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.407709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.407764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.407833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.407882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.407964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.408021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.408067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.408271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.408321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.408376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.408423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.408471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.408517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.408582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.408634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.408687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.408736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.408784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.408846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.408893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.408943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.140 [2024-07-24 14:13:11.408993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [same *ERROR* line repeated with successive timestamps 14:13:11.409040 through 14:13:11.428095; duplicates omitted] 00:14:44.144 Message suppressed 999 times: Read completed with error (sct=0, sc=15) [same *ERROR* line repeated with successive timestamps 14:13:11.428316 through 14:13:11.439719; duplicates omitted] 00:14:44.147
[2024-07-24 14:13:11.439766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.439824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.439875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.439926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.439975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.440023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.440081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.440142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.440191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.440240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.440293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.440341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.440388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.440432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.440481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.440532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.440580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.440630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.440678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.440725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.440774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.440843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.440910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.441104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.441155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.441224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.441275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.441326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.441375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.441421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.441472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.441520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.441568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.441617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.441664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.441713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.441764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.441821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.441873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.441927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.441976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.442026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.442080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.442128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.442179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.442227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.442279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.442334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.442387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.442442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.442491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.442541] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.442608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.442664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.442750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.442956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.443026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.443078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.443144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.443198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.443247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.443300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.443364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.443413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.443462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.443510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.443558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.443606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.443657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.443705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.443762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.147 [2024-07-24 14:13:11.443819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.443878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.443928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.443979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.444031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.444077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 
[2024-07-24 14:13:11.444126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.444175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.444225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.444275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.444329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.444381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.444437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.444485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.444570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.444621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.444671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.444870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.444923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.444972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.445027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.445079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.445128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.445177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.445224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.445269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.445318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.445365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.445415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.445462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.445520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.445585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.445638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.445687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.445742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.445799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.445851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.445903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.445953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.446001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.446053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.446106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.446156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.446216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.446263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.446310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.446367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.446439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.446663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.446728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.446778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.446837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.446891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.446941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.446992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.447042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.447092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.447139] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.447192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.447244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.447293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.447342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.447392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.447455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.447519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.447572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.447621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.447682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.447733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.447785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.447846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.447894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.447942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.447994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.448041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.448096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.448144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.448194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.448282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.448347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.448404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.448617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.432 [2024-07-24 14:13:11.448669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 
[2024-07-24 14:13:11.448717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.448766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.448829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.448879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.448930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.448983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.449032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.449077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.449123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.449168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.449214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.449262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.449321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.449372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.449421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.449469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.449523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.449572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.449619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.449669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.449716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.449765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.449822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.449881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.449931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.449977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.450028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.450078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.450161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.450370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.450431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.450504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.450557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.450609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.450658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.450707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.450756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.450818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.450871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.450921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.450974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.451022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.451075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.451127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.451177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.451227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.451277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.451327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.451377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.451438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.451491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.451540] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.451590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.451638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.451684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.451748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.451805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.451858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.451921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.451976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.452039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.452235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.452300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.452351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.452408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.452460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.452510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.452557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 true 00:14:44.433 [2024-07-24 14:13:11.452608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.452660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.452713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.452764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.452821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.452871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.452921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.452972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.453022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:14:44.433 [2024-07-24 14:13:11.453071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.453126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.453175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.453226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.453276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.453330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.453387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.453435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.453482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.453533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.453586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.453636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.453684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.453734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.453803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.453872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.454077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.454128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.454198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.454249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.454299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.454349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.454402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.454449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.454499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.454548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 
1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.454602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.454651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.454698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.454745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.454801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.454853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.454902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.454953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.455002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.455050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.455100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.455150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.455203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.455252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.455299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.455349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.455403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.455453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.455507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.455571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.455620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.455687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.455915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.455975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.456027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.456075] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.456127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.456176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.456229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.456282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.456336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.456381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.456428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.456486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.456535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.456586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.456639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.456687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.456741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.456798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.456851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.456900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.456949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.456997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.457049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.457099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.457159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.457209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.457262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.457308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.457355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 
[2024-07-24 14:13:11.457401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.457472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.457526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.457575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.457780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.457844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.433 [2024-07-24 14:13:11.457896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.434 [2024-07-24 14:13:11.457947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.434 [2024-07-24 14:13:11.457999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.434 [2024-07-24 14:13:11.458049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.434 [2024-07-24 14:13:11.458101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.434 [2024-07-24 14:13:11.458148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.434 [2024-07-24 14:13:11.458196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.434 [2024-07-24 14:13:11.458245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.434 [2024-07-24 14:13:11.458293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.434 [2024-07-24 14:13:11.458340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.434 [2024-07-24 14:13:11.458388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.434 [2024-07-24 14:13:11.458437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.434 [2024-07-24 14:13:11.458485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.434 [2024-07-24 14:13:11.458544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.434 [2024-07-24 14:13:11.458596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.434 [2024-07-24 14:13:11.458644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.434 [2024-07-24 14:13:11.458693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.434 [2024-07-24 14:13:11.458743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.434 [2024-07-24 14:13:11.458797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.434 [2024-07-24 14:13:11.458850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
[... identical "ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" entries at 00:14:44.434 repeated; duplicate entries elided ...]
00:14:44.434 14:13:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 56458
00:14:44.434 14:13:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
[... identical read-error entries continue; duplicates elided ...]
00:14:44.436 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... identical read-error entries continue through 14:13:11.489; duplicates elided ...]
00:14:44.437
[2024-07-24 14:13:11.489742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.489833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.489888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.489949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.490000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.490048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.490123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.490174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.490222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.490270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.490319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.490367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.490424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.490473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.490525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.490582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.490647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.490856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.490927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.490983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.491031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.491108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.491156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.491204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.491253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.491299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.491344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.491393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.491446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.491493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.491539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.491593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.491642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.491687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.491735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.491805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.491856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.491910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.491959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.492025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.492083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.492148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.492198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.492244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.492291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.492340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.492388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.492599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.492657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.492734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.492823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.492881] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.492942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.492998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.493058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.493129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.493182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.493233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.493280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.493331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.493380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.493430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.493477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.493527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.493574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.493625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.493667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.493715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.493768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.493846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.493894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.493944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.493993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.494042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.494108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.494153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.494200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 
[2024-07-24 14:13:11.494245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.494289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.494336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.494528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.494581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.494646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.494697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.494750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.494823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.494873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.494921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.494970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.495025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.495076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.495142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.495190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.495238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.495288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.495332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.495377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.495421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.495469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.495517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.495581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.495630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.495682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.495730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.495779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.495835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.495885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.495936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.495987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.496037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.496085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.496309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.496376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.496428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.496479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.496528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.496574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.496630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.496681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.496729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.496782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.496842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.496893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.496944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.496999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.497046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.497113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.497166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.497216] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.497264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.497311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.497361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.437 [2024-07-24 14:13:11.497413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.497462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.497513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.497558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.497606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.497655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.497704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.497749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.497819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.497869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.497918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.497967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.498038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.498229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.498281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.498353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.498403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.498455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.498502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.498553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.498604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.498652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 
[2024-07-24 14:13:11.498703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.498750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.498823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.498877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.498926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.498978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.499023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.499071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.499137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.499188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.499237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.499289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.499338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.499390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.499438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.499489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.499535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.499587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.499633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.499680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.499740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.499809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.500022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.500092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.500157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.500206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.500252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.500299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.500350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.500398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.500445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.500492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.500543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.500591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.500638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.500686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.500732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.500809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.500863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.500913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.500963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.501013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.501065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.501126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.501179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.501230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.501277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.501326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.501383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.501429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.501476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.501522] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.501571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.501621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.501686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.501887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.501953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.502006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.502054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.502122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.502172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.502222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.502268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.502320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.502369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.502419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.502467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.502519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.502566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.502611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.502658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.502705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.502752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.502824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.502879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.502934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.502986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 
[2024-07-24 14:13:11.503035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.503084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.503150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.503200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.503253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.503300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.503346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.503393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.503602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.503653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.503706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.503786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.503846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.503895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.503948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.503996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.504047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.504130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.504178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.504226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.504273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.504321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.504368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.504418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.504466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.504519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.504564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.504612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.504669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.504715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.504766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.504841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.504893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.504944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.504998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.505048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.505110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.505162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.438 [2024-07-24 14:13:11.505210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.505256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.505334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.505385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.505434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.505625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.505681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.505728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.505801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.505853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.505903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.505952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.506011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.506060] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.506129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.506177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.506230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.506273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.506322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.506365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.506414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.506462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.506510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.506561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.506608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.506661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.506711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.506761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.506839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.506914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.506975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.507027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.507089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.507150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.507213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.507283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.507465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.507529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.507579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 
[2024-07-24 14:13:11.507627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.507677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.507723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.507788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.507857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.507916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.507969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.508018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.508079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.508152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.508207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.508259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.508312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.508367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.508415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.508466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.508511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.508561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.508615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.508662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.508708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.508753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.508824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.508876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.508928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.439 [2024-07-24 14:13:11.508976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:14:44.439 [2024-07-24 14:13:11.509030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
(the same error message repeats several hundred times, timestamps 2024-07-24 14:13:11.509088 through 14:13:11.538701)
[2024-07-24 14:13:11.538748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.538999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.539055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.539127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.539175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.539227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.539282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.539329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.539377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.539426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.539483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.539530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.539581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.539641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.539694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.539740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.539813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.539868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.539917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.539964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.540013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.540062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.540125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.540177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.540230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.540276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.540326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.540373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.540419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.540466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.540510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.540561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.540611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.540676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.540882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.540950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.541002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.541055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.541121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.541168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.541218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.541267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.541315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.541362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.541412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.541464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.541511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.541558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.443 [2024-07-24 14:13:11.541605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.541648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.541698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.541745] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.541817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.541869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.541919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.541969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.542022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.542074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.542136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.542185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.542231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.542281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.542331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.542386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:44.444 [2024-07-24 14:13:11.542595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.542651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.542716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.542780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.542841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.542886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.542946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.543001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.543051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.543105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.543167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.543219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.543267] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.543316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.543364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.543410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.543457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.543504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.543551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.543598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.543646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.543694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.543745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.543817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.543872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.543921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.543970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.544020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.544068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.544129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.544179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.544227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.544278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.544469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.544520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.544586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.544638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.544686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 
[2024-07-24 14:13:11.544733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.544803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.544858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.544908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.544955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.545003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.545050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.545117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.545166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.545212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.545260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.545305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.545355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.545405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.545452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.545502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.545549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.545598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.545661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.545717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.545769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.545825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.545877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.545926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.545974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.546023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.546253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.546305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.546352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.546412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.546460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.546511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.546556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.546610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.546659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.546711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.546758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.546831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.546884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.546932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.546989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.547040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.547093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.547157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.547204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.547252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.547298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.547357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.547413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.547466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.547515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.547567] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.547616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.547670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.547721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.547784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.547847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.547894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.547943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.548007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.548201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.548254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.548320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.548372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.548420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.548473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.548520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.548567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.548614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.548666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.548715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.548761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.548835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.548886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.548934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.548985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.549036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 
[2024-07-24 14:13:11.549084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.549151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.549199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.549249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.549296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.549343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.549395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.549442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.549497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.549545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.549590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.549637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.549694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.549743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.549976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.550031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.550108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.550159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.550209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.550258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.444 [2024-07-24 14:13:11.550309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.550360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.550410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.550457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.550508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.550554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.550601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.550649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.550699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.550748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.550822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.550875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.550929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.550977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.551024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.551090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.551139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.551184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.551230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.551278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.551330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.551378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.551425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.551479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.551534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.551593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.551658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.551871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.551923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.551995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.552044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.552108] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.552157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.552207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.552260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.552308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.552356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.552402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.552448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.552495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.552544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.552598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.552646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.552698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.552747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.552820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.552871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.552921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.552975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.553028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.553093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.553136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.553182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.553235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.553284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.553333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.553397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 
[2024-07-24 14:13:11.553447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.553513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.553708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.553788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.553854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.553902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.553953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.554001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.554052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.554101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.554165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.554214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.554263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.554310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.554356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.554412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.554460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.554507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.554557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.554604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.554654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.554702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.554752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.554823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.554874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.554925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.554973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.555026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.555091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.555145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.555195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.555241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.555318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.555371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.555425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.555611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.555663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.555708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.555761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.555834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.555885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.555935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.555982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.556030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.556079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.556141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.556191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.556239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.556288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.556335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.556384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.556433] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.556494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.556543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.556591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.556642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.556689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.556741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.556820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.556872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.556930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.556978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.557033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.557083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.557155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.557216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.557405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.557468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.557521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.557569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.557618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.557673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.557719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.557766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.557845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.557897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.557943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 
[2024-07-24 14:13:11.557996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.558049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.558111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.558161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.558208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.558256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.558303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.558352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.558398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.558447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.558496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.558543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.558592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.558641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.558688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.558738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.558811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.558866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.558918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.558998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.559053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.559119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.559327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.559378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.559427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.445 [2024-07-24 14:13:11.559475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:14:44.445 [2024-07-24 14:13:11.559528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[the identical *ERROR* record above repeats several hundred more times through 2024-07-24 14:13:11.590274, with only the microsecond timestamp advancing; the duplicate records are collapsed here]
[2024-07-24 14:13:11.588983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.589031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.589094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.589170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.589218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.589264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.589320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.589370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.589420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.589465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.589513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.589561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.589605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.589651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.589698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.589744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.589827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.589882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.589931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.589981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.590030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.590078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.590146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.590196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.590274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.590325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.590369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.590568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.590617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.590674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.590725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.590788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.590847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.590895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.590949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.590998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.591057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.591134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.591184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.591233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.591280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.591331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.591377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.591423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.591476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.591524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.591572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.448 [2024-07-24 14:13:11.591623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.591673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.591727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.591775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.591846] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.591899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.591950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.592005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.592053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.592118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.592181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.592364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.592431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.592481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.592534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.592583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.592633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.592682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.592732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.592809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.592870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.592925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.592973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.593024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.593072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.593139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.593186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.593234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.593278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.593322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 
[2024-07-24 14:13:11.593369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.593418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.593470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.593516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.593565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.593612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.593660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.593709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.593758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.593834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.593885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.593971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.594022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.594076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.594290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.594346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.594394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.594441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.594486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.594533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.594581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.594627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.594674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.594723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.594799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.594854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.594904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.594952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.594998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.595049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.595111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.595156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.595206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.595261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.595310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.595364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.595411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.595462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.595511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.595561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.595607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.595652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.595699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.595761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.595835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.596013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.596070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.596150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.596205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.596256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.596303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.596350] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.596416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.596464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.596512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.596565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.596617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.596669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.596735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.596808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.596859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.596905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.596950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.596999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.597049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.597117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.597163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.597210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.597257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.597304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.597351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.597400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.597452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.597501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.597565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.597615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.597677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 
[2024-07-24 14:13:11.597927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.597982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.598033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.598081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.598146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.598193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.598244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.598295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.598342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.598390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.598437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.598485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.598531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.598583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.598633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.598680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.598728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.598805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.598858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.598906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.598953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.599009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.599060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.599125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.599176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.599225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.599271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.599320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.599363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.599411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.599462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.599524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:44.449 [2024-07-24 14:13:11.599707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.599755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.599843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.599894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.599942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.599991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.600040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.600115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.600164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.600211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.600259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.600305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.600355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.600409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.600460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.600507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.600554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.600602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.600652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:14:44.449 [2024-07-24 14:13:11.600703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.600753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.600827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.600878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.600928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.600976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.601025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.601074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.449 [2024-07-24 14:13:11.601137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.601183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.601241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.601287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.601350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.601530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.601592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.601643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.601687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.601728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.601801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.601853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.601906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.601955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.602004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.602052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.602115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.602170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.602216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.602271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.602320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.602370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.602419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.602466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.602513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.602563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.602610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.602656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.602703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.602752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.602828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.602878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.602924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.602974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.603026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.603121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.603173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.603227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.603421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.603470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.603518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.603563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.603608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.603657] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.603704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.603752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.603837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.603891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.603942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.603993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.604039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.604105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.604155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.604200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.604248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.604300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.604350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.604403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.604452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.604503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.604551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.604601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.604649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.604699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.604747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.604820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.604871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.604919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.605145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 
[2024-07-24 14:13:11.605199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.605260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.605310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.605360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.605406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.605457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.605505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.605556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.605604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.605655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.605703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.605750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.605820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.605875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.605927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.605979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.606038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.606104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.606158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.606206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.606252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.606299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.606349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.606397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.606441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.606488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.606537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.606587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.606635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.606680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.606758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.606833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.606884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.607091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.607144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.607194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.607244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.607292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.607346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.607393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.607451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.607497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.607547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.607594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.607643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.607691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.607736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.607808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.607863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.607915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.607965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.608013] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.608065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.608129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.608178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.608227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.608276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.608324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.608372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.608425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.608474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.608525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.608582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.608650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.608878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.608928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.608993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.609045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.609109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.609157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.609211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.609259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.609308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.609355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.609404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.609450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 [2024-07-24 14:13:11.609499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.450 
00:14:44.450 [2024-07-24 14:13:11.609545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:14:44.453 [identical "Read NLB 1 * block size 512 > SGL length 1" errors from ctrlr_bdev.c:309:nvmf_bdev_ctrlr_read_cmd repeated continuously between the timestamps above and below; duplicates omitted]
00:14:44.453 [2024-07-24 14:13:11.640550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[2024-07-24 14:13:11.640597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.453 [2024-07-24 14:13:11.640649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.453 [2024-07-24 14:13:11.640699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.453 [2024-07-24 14:13:11.640746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.453 [2024-07-24 14:13:11.640817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.453 [2024-07-24 14:13:11.640870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.453 [2024-07-24 14:13:11.640921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.453 [2024-07-24 14:13:11.640968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.453 [2024-07-24 14:13:11.641015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.453 [2024-07-24 14:13:11.641083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.453 [2024-07-24 14:13:11.641134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.453 [2024-07-24 14:13:11.641178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.453 [2024-07-24 14:13:11.641226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.453 [2024-07-24 14:13:11.641273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.453 [2024-07-24 14:13:11.641319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.453 [2024-07-24 14:13:11.641368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.453 [2024-07-24 14:13:11.641416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.453 [2024-07-24 14:13:11.641465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.453 [2024-07-24 14:13:11.641512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.453 [2024-07-24 14:13:11.641562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.453 [2024-07-24 14:13:11.641610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.453 [2024-07-24 14:13:11.641658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.453 [2024-07-24 14:13:11.641714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.453 [2024-07-24 14:13:11.641955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.453 [2024-07-24 14:13:11.642015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.453 [2024-07-24 14:13:11.642063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.453 [2024-07-24 14:13:11.642129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.453 [2024-07-24 14:13:11.642176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.642224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.642271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.642321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.642366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.642411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.642462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.642512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.642559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.642608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.642656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.642707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.642755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.642831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.642885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.642935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.642989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.643039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.643086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.643139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.643202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.643257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.643308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.643358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.643404] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.643453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.643503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.643553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.643625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.643823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.643894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.643947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.643995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.644049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.644118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.644168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.644219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.644266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.644317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.644368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.644418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.644467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.644514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.644563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.644609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.644658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.644704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.644751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.644822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.644878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 
[2024-07-24 14:13:11.644929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.644978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.645027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.645091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.645139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.645187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.645238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.645284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.645333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.645413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.645466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.645514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.645711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.645768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.645843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.645896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.645942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.645989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.646035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.646085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.646154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.646203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.646252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.646298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.646345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.646397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.646446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.646499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.646545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.646596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.646641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.646695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.646745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.646819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.646872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.646928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.646977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.647026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.647077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.647142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.647186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.647234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.647424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.647483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.647546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.647596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.647647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.647694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.647745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.647817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.647869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.647918] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.647969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.648018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.648068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.648139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.648187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.648237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.648285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.648330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.648377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.648424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.648471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.648520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.648568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.648616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.648670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.648717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.648782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.648842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.648890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.648939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.648987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.649216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.649285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.649335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.649385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 
[2024-07-24 14:13:11.649436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.649485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.649530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.649575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.649619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.649667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.649716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.649765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.649842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.649895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.649945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.649994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.650046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.650110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.650158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.650203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.650253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.650309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.650355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.650401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.650459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.650506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.650553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.650602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.650653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.650700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.650744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.650816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.650865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.650931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.651122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.651173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.651245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.651294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.651344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.651390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.651439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.651486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.651533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.651595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.651642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.651687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.651741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.651816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.651875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.651934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.651978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.652030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.652097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.652146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.652193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.652237] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.652283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.652332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.652380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.652426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.652475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.652520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.454 [2024-07-24 14:13:11.652571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.652637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.652686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.652751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.652990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.653045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.653115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.653165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.653215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.653263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.653313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.653362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.653413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.653460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.653510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.653560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.653613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.653659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.653708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 
[2024-07-24 14:13:11.653754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.653824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.653873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.653925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.653973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.654029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.654079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.654146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.654195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.654243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.654290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.654338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.654382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.654427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.654474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.654522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.654584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.654764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.654855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.654909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.654961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.655009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.655058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.655124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.655172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.655223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.655270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.655317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.655367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.655422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.655467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.655514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.655562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.655611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.655662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.655712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.655759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.655833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.655889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.655937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.655986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.656034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.656099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.656146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.656194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.656242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.656295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.656498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.656549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.656598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.656661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.656708] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.656756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.656837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.656897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.656943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.656993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.657041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.657103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.657158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.657205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.657255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.657302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.657354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.657404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.657452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.657508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.657556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.657601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.657658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.657707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.657757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.657832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.657880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.657928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.657981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.658029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 
[2024-07-24 14:13:11.658080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.658145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:44.455 [2024-07-24 14:13:11.658228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.658280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.658333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.658528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.658578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.658625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.658675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.658723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.658786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.658845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.658892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.658947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.659000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.659050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.659112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.659157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.659205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.659258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.659307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.659356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.659405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.659454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.659504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 
14:13:11.659553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.659603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.659651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.659700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.659746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.659817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.659867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.659918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.659967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.660016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.660239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.660293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.660357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.660407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.660454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.660504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.660558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.660605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.660658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.660706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.660760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.660837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.660888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.660936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.660983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.455 [2024-07-24 14:13:11.661033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1
00:14:44.455 [2024-07-24 14:13:11.661105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:14:44.455 (identical *ERROR* line repeated continuously through [2024-07-24 14:13:11.689291], elapsed 00:14:44.455-00:14:44.459)
size 512 > SGL length 1 00:14:44.459 [2024-07-24 14:13:11.688924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.459 [2024-07-24 14:13:11.688972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.459 [2024-07-24 14:13:11.689018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.459 [2024-07-24 14:13:11.689071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.459 [2024-07-24 14:13:11.689134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.459 [2024-07-24 14:13:11.689182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.459 [2024-07-24 14:13:11.689241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.459 [2024-07-24 14:13:11.689291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.459 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:44.459 14:13:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:44.459 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:44.459 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:44.721 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:44.721 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:44.721 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:44.721 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:44.721 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:44.721 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:44.721 [2024-07-24 14:13:11.938470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.938543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.938591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.938635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.938679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.938732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.938801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.938864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.938937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.938983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.939029] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.939103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.939173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.939225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.939270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.939317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.939364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.939411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.939456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.939504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.939550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.939597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.939642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.939692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.939735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.939782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.939854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.939921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.939970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.940019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.940075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.940126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.940205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.940269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.940317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.940498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 
[2024-07-24 14:13:11.940545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.940594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.940639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.940689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.940736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.940818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.940865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.940913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.940965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.941023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.941069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.721 [2024-07-24 14:13:11.941121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.941174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.941224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.941275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.941323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.941368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.941420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.941465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.941515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.941562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.941611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.941657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.941703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.941749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.941823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.941875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.941925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.942157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.942230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.942281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.942328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.942379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.942424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.942472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.942516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.942564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.942615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.942661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.942708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.942755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.942832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.942888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.942937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.942987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.943035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.943086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.943148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.943196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.943241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.943291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.943345] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.943394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.943449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.943495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.943543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.943594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.943641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.943686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.943745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.943805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.943874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.944049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.944098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.944160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.944208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.944256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.944303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.944357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.944403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.944450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.944496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.944545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.944598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.944651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.944700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.944748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 
[2024-07-24 14:13:11.944804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.944855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.944900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.944947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.944995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.945043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.945092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.945148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.945195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.945244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.945291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.945344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.945390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.945440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.945672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.945731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.945806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.945871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.945918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.945966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.946015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.946065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.946122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.946172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.946220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.946269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.946318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.946371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.946423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.946475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.722 [2024-07-24 14:13:11.946521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.946570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.946620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.946672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.946718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.946766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.946835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.946887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.946937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.946988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.947037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.947096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.947144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.947192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.947238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.947282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.947326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.947372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.947434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.947616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.947692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.947742] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.947799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.947851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.947901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.947947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.947998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.948046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.948098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.948145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.948191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.948248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.948299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.948348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.948396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.948445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.948491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.948538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.948587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.948638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.948685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.948734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.948784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.948852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.948900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.948954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.949001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 
[2024-07-24 14:13:11.949049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.949103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.949330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.949381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.949438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.949487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.949533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.949586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.949633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.949678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.949725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.949774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.949841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.949890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.949938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.949988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.950036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.950082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.950139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.950189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.950236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.950284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.950342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.950392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.950442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.950491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.950537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.950591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.950649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.950697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.950750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.950804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.950851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.950896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.950957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.951007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.951058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.951219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.951274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.951335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.951390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.951440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.951495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.951546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.951596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.951643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.951691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.951738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.951784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.951850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.723 [2024-07-24 14:13:11.951899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.951953] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.952000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.952046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.952092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.952143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.952189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.952238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.952285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.952333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.952380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.952432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.952480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.952532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.952590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.952641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.952866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.952932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.952993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.953041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.953097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.953152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.953200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.953247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.953292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.953344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.953391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 
[2024-07-24 14:13:11.953439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.953487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.953537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.953584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.953635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.953683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.953729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.953782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.953841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.953892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.953943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.953990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.954034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.954083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.954136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.954183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.954230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.954285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.954333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.954379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.954424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.954469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.954530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.954711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.954761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.954831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.954881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.954931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.954980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.955030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.955088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.955135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.955186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.955233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.955279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.955329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.955385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.955434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.955481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.955537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.955586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.955630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.955675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.955723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.955771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.955839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.955888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.955940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.955987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.956034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.956087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.724 [2024-07-24 14:13:11.956144] ctrlr_bdev.c: 
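Editorial note on the collapsed flood above: every error is the NVMe-oF target rejecting a read whose payload would overrun the host-supplied buffer; each command asks for NLB 1 block of 512 bytes while the stress test's SGL is only 1 byte long. The following is a minimal standalone sketch of that guard, paraphrased from the error text rather than copied from SPDK's ctrlr_bdev.c, with a hypothetical helper name (read_fits_sgl):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Reject a read whose payload (NLB blocks * block size) exceeds the
 * length of the scatter-gather list the host provided. */
static bool read_fits_sgl(uint64_t nlb, uint32_t block_size, uint32_t sgl_length)
{
        if (nlb * block_size > sgl_length) {
                fprintf(stderr, "Read NLB %llu * block size %u > SGL length %u\n",
                        (unsigned long long)nlb, block_size, sgl_length);
                return false; /* the target completes the request with an error status */
        }
        return true;
}

int main(void)
{
        /* The values seen throughout this log: 1 block of 512 bytes
         * against a 1-byte SGL, so every read is rejected. */
        return read_fits_sgl(1, 512, 1) ? 0 : 1;
}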
[... more of the same "Read NLB 1 * block size 512 > SGL length 1" errors omitted ...]
00:14:44.725 14:13:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:14:44.725 14:13:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
[... the same read error repeated verbatim while the null bdev is resized; duplicates omitted ...]
00:14:44.726 [2024-07-24 14:13:11.963257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.963306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.963543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.963596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.963663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.963713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.963767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.963823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.963874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.963928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.963975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.964022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.964068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.964110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.964160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.964210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.964258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.964308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.964354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.964403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.964450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.964499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.964548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.964597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.964655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.964706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.964757] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.964834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.964885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.964941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.964992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.965044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.965103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.965149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.965195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.965241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.965302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.965354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.965405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.965574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.965642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.965698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.965748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.965809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.965871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.965918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.965969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.966019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.966066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.966114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.966163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.966212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 
[2024-07-24 14:13:11.966261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.966325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.966371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.966418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.966469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.966523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.966576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.966623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.966674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.966721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.966773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.966830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.726 [2024-07-24 14:13:11.966881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.966931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.966978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.967227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.967277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.967328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.967378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.967426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.967476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.967525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.967571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.967615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.967661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.967710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.967757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.967816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.967868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.967917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.967967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.968017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.968065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.968112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.968162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.968213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.968263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.968313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.968368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.968426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.968478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.968528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.968576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.968626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.968675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.968727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.968776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.968845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.968896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.968946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.968994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.969055] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.969221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.969272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.969343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.969393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.969443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.969492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.969546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.969596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.969644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.969693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.969741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.969798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.969854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.969902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.969950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.969996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.970043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.970093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.970147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.970199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.970249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.970300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.970347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.970398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.970445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 
[2024-07-24 14:13:11.970504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.970555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:44.727 [2024-07-24 14:13:11.970774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.970855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.970906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.970960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.971012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.971063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.971113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.971163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.971205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.971253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.971305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.971354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.971406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.971454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.971509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.971561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.971612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.971661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.971716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.971765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.971823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.971875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.971921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 
14:13:11.971972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.972025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.972074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.972125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.972172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.972225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.727 [2024-07-24 14:13:11.972278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.972326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.972374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.972420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.972482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.972535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.972588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.972756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.972818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.972885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.972937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.972986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.973035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.973087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.973141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.973191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.973249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.973297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.973343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.973393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:14:44.728 [2024-07-24 14:13:11.973439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.973489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.973536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.973581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.973627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.973676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.973725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.973776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.973836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.973890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.973942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.974003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.974050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.974100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.974302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.974356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.974410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.974460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.974512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.974560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.974608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.974676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.974725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.974776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.974833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.974885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.974934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.974984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.975033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.975085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.975132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.975182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.975229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.975282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.975331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.975381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.975431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.975481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.975530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.975579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.975631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.975684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.975734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.975782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.975847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.975900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.975988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.976038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.976086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.976137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.976195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.976249] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.976298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.976474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.976533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.976581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.976635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.976683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.976731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.976779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.976837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.976892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.976941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.976991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.977042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.977088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.977133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.977181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.977230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.977279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.977329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.977378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.977430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.977482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.977532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.977585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 [2024-07-24 14:13:11.977634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:44.728 
00:14:44.986 true
00:14:44.986 14:13:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 56458
00:14:44.986 14:13:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:45.919 14:13:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:45.919 14:13:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:14:45.919 14:13:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:14:46.175 true
00:14:46.175 14:13:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 56458
00:14:46.175 14:13:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:46.432 14:13:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:46.689 14:13:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:14:46.689 14:13:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:14:46.947 true
00:14:46.947 14:13:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 56458
00:14:46.947 14:13:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:47.879 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:47.879 14:13:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:47.879 Message suppressed 999 times: Read completed with error (sct=0, sc=11) (last message repeated 8 times)
00:14:48.162 [2024-07-24 14:13:15.250389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (last message repeated continuously through 14:13:15.258965)
00:14:48.163 [2024-07-24 14:13:15.259018] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.163 [2024-07-24 14:13:15.259067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.163 [2024-07-24 14:13:15.259130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.163 [2024-07-24 14:13:15.259180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.163 [2024-07-24 14:13:15.259226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.163 [2024-07-24 14:13:15.259430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.163 [2024-07-24 14:13:15.259494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.163 [2024-07-24 14:13:15.259545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.163 [2024-07-24 14:13:15.259597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.163 [2024-07-24 14:13:15.259642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.163 [2024-07-24 14:13:15.259692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.163 [2024-07-24 14:13:15.259740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.163 [2024-07-24 14:13:15.259814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.163 [2024-07-24 14:13:15.259898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.163 [2024-07-24 14:13:15.259946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.163 [2024-07-24 14:13:15.259995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.163 [2024-07-24 14:13:15.260044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.163 [2024-07-24 14:13:15.260116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.163 [2024-07-24 14:13:15.260164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.163 [2024-07-24 14:13:15.260214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.163 [2024-07-24 14:13:15.260266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.163 [2024-07-24 14:13:15.260311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.163 [2024-07-24 14:13:15.260358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.163 [2024-07-24 14:13:15.260404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.163 [2024-07-24 14:13:15.260452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.163 [2024-07-24 14:13:15.260494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.163 
[2024-07-24 14:13:15.260542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.163 [2024-07-24 14:13:15.260589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.163 [2024-07-24 14:13:15.260642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.163 [2024-07-24 14:13:15.260688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.163 [2024-07-24 14:13:15.260737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.163 [2024-07-24 14:13:15.260809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.163 [2024-07-24 14:13:15.260860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.260907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.260957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.261007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.261053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.261289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.261345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.261391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.261439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.261487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.261535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.261582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.261632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.261676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.261727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.261786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.261844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.261893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.261945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.261994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.262041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.262088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.262150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.262196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.262241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.262287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.262335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.262382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.262432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.262483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.262529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.262572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.262618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.262668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.262715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.262761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.262835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.262898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.263109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.263161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.263205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.263255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.263302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.263351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.263400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.263445] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.263492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.263543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.263588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.263633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.263679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.263723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.263800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.263850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.263898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.263949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.263996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.264042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.264102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.264147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.264191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.264237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.264282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.264329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.264376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.264421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.264466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.264511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.264557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.264756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.264847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 
[2024-07-24 14:13:15.264908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.264955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.265002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.265058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.265121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.265170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.265218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.265265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.265315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.265359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.265402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.265448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.265493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.265538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.265587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.265635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.265680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.265726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.265786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.265852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.265901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.265954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.265995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.266043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.266108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.266157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.266205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.164 [2024-07-24 14:13:15.266251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.266303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.266355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.266562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.266613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.266681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.266730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.266799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.266852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.266924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.266973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.267022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.267085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.267136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.267185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.267230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.267274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.267320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.267365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.267410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.267458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.267504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.267549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.267597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.267643] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.267686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.267728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.267787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.267852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.267902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.267951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.268001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.268054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.268116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.268314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.268363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.268422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.268471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.268517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.268562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 14:13:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:14:48.165 [2024-07-24 14:13:15.268608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.268658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.268700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.268744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.268816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.268863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.268917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 14:13:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:14:48.165 [2024-07-24 14:13:15.268963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:14:48.165 [2024-07-24 14:13:15.269012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.269060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.269126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.269190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.269236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.269281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.269329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.269373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.269418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.269466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.269521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.269566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.269610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.269654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.269697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.269744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.269815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.269887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.269938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.270185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.270249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.270298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.270344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.270387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.270429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.270476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 
1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.270520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.270568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.270614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.270658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.270707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.270756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.270833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.270888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.270936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.270981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.271026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.271087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.271152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.271199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.271249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.271292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.271334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.271375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.271419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.271465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.271511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.165 [2024-07-24 14:13:15.271555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.271598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.271648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.271692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.271928] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.271978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.272029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.272098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.272149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.272194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.272240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.272286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.272336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.272381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.272428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.272475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.272518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.272567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.272615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.272661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.272711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.272756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.272831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.272882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.272932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.272980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.273024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.273093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.273155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.273202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 
[2024-07-24 14:13:15.273242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.273288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.273331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.273374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.273420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.273473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.273529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.273710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.273763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.273856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.273910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.273960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.274006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.274053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.274119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.274181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.274227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.274271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.274321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.274365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.274416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.274459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.274506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.274550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.274596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.274645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.274689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.274731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.274804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.274868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.274917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.274964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.275008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.275059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.275123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.275189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.275243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.275288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.275478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.275541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.275592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.275637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.275686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.275738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.275808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.275874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.275923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.275972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.276020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.166 [2024-07-24 14:13:15.276066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.167 [2024-07-24 14:13:15.276131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.167 [2024-07-24 14:13:15.276197] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.167 [2024-07-24 14:13:15.276246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.167 [2024-07-24 14:13:15.276295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.167 [2024-07-24 14:13:15.276341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.167 [2024-07-24 14:13:15.276387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.167 [2024-07-24 14:13:15.276432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.167 [2024-07-24 14:13:15.276475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.167 [2024-07-24 14:13:15.276524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.167 [2024-07-24 14:13:15.276568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.167 [2024-07-24 14:13:15.276615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.167 [2024-07-24 14:13:15.276663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.167 [2024-07-24 14:13:15.276707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.167 [2024-07-24 14:13:15.276749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.167 [2024-07-24 14:13:15.276824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.167 [2024-07-24 14:13:15.276883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.167 [2024-07-24 14:13:15.276929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.167 [2024-07-24 14:13:15.276976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.167 [2024-07-24 14:13:15.277033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.167 [2024-07-24 14:13:15.277079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.167 [2024-07-24 14:13:15.277296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.167 [2024-07-24 14:13:15.277347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.167 [2024-07-24 14:13:15.277408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.167 [2024-07-24 14:13:15.277458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.167 [2024-07-24 14:13:15.277506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.167 [2024-07-24 14:13:15.277553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.167 [2024-07-24 14:13:15.277599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.167 
[2024-07-24 14:13:15.277648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:14:48.169 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:14:48.173 [2024-07-24 14:13:15.307874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.307938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.307988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.308039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.308110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.308158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.308208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.308252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.308299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.308344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.308392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.308438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.308483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.308532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.308580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.308625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.308670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.308720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.308766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.308838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.308888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.308937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.308984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.309028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.309070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.309141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.309191] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.309240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.309287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.309337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.309392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.309459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.309627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.309673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.309737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.309817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.309867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.309915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.309963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.310015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.310063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.310128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.310174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.310218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.310262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.310308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.310358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.310406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.310454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.310503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.310554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.310602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 
[2024-07-24 14:13:15.310649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.310694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.310741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.310813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.310863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.310910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.310964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.311013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.311063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.311147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.311199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.311262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.311442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.311491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.311557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.311604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.311650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.311694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.311740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.311815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.311869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.311915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.311968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.312015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.312059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.312132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.312177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.312228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.312273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.312324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.312370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.312417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.312463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.312508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.312548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.312594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.312640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.312684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.312731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.312804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.312854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.173 [2024-07-24 14:13:15.312914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.312964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.313027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.313217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.313281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.313331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.313383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.313430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.313480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.313524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.313571] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.313614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.313658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.313705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.313752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.313834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.313884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.313939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.313990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.314042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.314113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.314180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.314230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.314274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.314321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.314365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.314410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.314456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.314512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.314559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.314608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.314654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.314702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.314761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.314851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.315021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 
[2024-07-24 14:13:15.315102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.315154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.315200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.315248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.315295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.315346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.315392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.315440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.315486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.315531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.315583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.315643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.315707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.315756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.315836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.315886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.315940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.315990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.316037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.316098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.316144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.316192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.316239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.316285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.316330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.316377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.316424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.316477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.316530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.316606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.316657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.316705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.316933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.316985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.317034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.317107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.317157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.317204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.317250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.317294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.317339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.317386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.317435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.317484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.317532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.317584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.317632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.317684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.317732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.317808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.317858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.317907] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.317957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.318009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.318059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.318131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.318189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.318237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.318282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.318330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.318384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.318446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.174 [2024-07-24 14:13:15.318501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.318551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.318733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.318808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.318863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.318916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.318964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.319016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.319065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.319138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.319190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.319238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.319287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.319334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.319384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 
[2024-07-24 14:13:15.319431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.319479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.319526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.319571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.319617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.319660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.319706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.319756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.319837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.319889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.319940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.319985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.320035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.320100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.320152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.320204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.320254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.320319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.320524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.320576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.320625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.320671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.320725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.320771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.320841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.320889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.320944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.320992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.321046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.321092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.321156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.321207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.321254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.321307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.321357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.321409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.321457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.321506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.321555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.321600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.321646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.321700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.321748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.321828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.321878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.321931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.321980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.322026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.322076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.322160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.322331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.322379] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.322441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.322493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.322544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.322591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.322641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.322690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.322742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.322819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.322870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.322920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.322974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.323022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.323079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.323145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.323193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.323237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.323281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.323326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.175 [2024-07-24 14:13:15.323373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.323421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.323472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.323518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.323567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.323614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.323666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 
[2024-07-24 14:13:15.323713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.323765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.323855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.323905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.323976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.324204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.324256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.324307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.324356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.324403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.324450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.324495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.324540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.324586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.324633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.324680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.324728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.324788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.324850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.324902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.324955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.325004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.325056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.325120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.325168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.325215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.325263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.325316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.325363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.325414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.325466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.325512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.325561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.325609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.325656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.325706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.325771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.325986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.326038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.326089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.326139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.326200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.326248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.326294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.326341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.326389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.326438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.326491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.326538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.326582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.326634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.176 [2024-07-24 14:13:15.326683] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:14:48.176 [... identical "ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" entries repeat verbatim, [2024-07-24 14:13:15.326728] through [2024-07-24 14:13:15.344057]; hundreds of duplicate occurrences elided ...]
00:14:48.179 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:14:48.179 [... the same error entry continues to repeat verbatim, [2024-07-24 14:13:15.344264] through [2024-07-24 14:13:15.357832]; duplicates elided ...]
00:14:48.182 [2024-07-24 14:13:15.357890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182
[2024-07-24 14:13:15.357946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.357995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.358043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.358114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.358166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.358214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.358257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.358309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.358375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.358424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.358473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.358524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.358578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.358627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.358680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.358732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.358801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.358851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.358906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.358955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.359001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.359081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.359132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.359184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.359402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.359452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.359497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.359544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.359592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.359642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.359690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.359748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.359839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.359893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.359943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.359991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.360044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.360114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.360163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.360225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.360277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.360323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.360371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.360435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.360483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.360534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.360601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.360664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.360712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.360759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.360825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.360878] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.360935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.360983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.361056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.361250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.361316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.361378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.361434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.361481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.361532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.361587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.361636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.361683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.361731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.361779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.361874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.182 [2024-07-24 14:13:15.361922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.361971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.362024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.362071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.362149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.362198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.362244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.362291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.362338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.362386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 
[2024-07-24 14:13:15.362432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.362481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.362528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.362577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.362626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.362678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.362728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.362824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.362877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.362944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.363173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.363223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.363273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.363323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.363370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.363423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.363471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.363516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.363563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.363615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.363662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.363711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.363762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.363853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.363904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.363959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.364011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.364061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.364133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.364184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.364230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.364276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.364323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.364367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.364412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.364461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.364508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.364557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.364608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.364655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.364712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.364775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.364987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.365041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.365121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.365171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.365223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.365270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.365322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.365377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.365424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.365474] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.365522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.365569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.365615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.365660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.365707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.365753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.365831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.365889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.365937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.365991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.366039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.366088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.366151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.366198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.366245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.366290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.366337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.366384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.366439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.366500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.366548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.366623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.366850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.366923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.366978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 
[2024-07-24 14:13:15.367025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.367075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.367139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.367189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.367242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.367292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.367339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.367391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.367440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.183 [2024-07-24 14:13:15.367488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.367535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.367590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.367639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.367690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.367736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.367814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.367868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.367921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.367975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.368021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.368082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.368158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.368206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.368254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.368300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.368351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.368397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.368473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.368521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.368571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.368749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.368837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.368911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.368959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.369014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.369061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.369124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.369187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.369251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.369298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.369344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.369391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.369435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.369481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.369525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.369572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.369628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.369673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.369721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.369765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.369842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.369894] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.369940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.369987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.370037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.370085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.370145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.370192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.370241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.370286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.370488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.370537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.370603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.370652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.370699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.370744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.370815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.370871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.370919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.370969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.371018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.371068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.371139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.371184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.371232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.371279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.371324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 
[2024-07-24 14:13:15.371368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.371422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.371474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.371526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.371569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.371613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.371661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.371707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.371755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.371841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.371890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.371938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.371987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.372035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.372122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.372172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.372220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.372397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.372455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.372502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.372546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.372596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.184 [2024-07-24 14:13:15.372641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.372685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.372729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.372803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.372855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.372908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.372956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.373003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.373049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.373118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.373168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.373211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.373272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.373321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.373369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.373413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.373459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.373508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.373556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.373601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.373652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.373698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.373746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.373818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.373879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.373939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.374122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.374171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.374252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.374302] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.374348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.374395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.374440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.374490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.374536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.374581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.374627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.374673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.374721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.374767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.374852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.374903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.374952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.375003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.375049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.375116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.375178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.375232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.375280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.375325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.375372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.375417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.375462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.375513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.375557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 
[2024-07-24 14:13:15.375622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.375670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.375732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.375961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.376014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.376062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.376135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.376184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.376231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.376280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.376334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.376382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.376427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.376476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.376522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.376567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.376615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.376660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.376706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.376756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.376828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.376881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.376928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.376978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.377029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.377094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:48.185 [2024-07-24 14:13:15.377141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
[same ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd error repeated several hundred times, 14:13:15.377 through 14:13:15.401; duplicates elided] 
00:14:48.190 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 
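The suppressed status decodes, assuming the test harness prints sc in decimal, as NVMe generic status 0x0f (15), Data SGL Length Invalid, which matches the error text: the target's read handler rejects any read whose payload, NLB times the block size, exceeds the length described by the command's SGL. Below is a minimal standalone C sketch of that bounds check, inferred from the error line alone; the function name read_cmd_fits_sgl and the reporting are illustrative assumptions, not SPDK's actual code.

/* Illustrative sketch (not SPDK's implementation) of the check implied by
 * the repeated error line: a read of NLB logical blocks transfers
 * NLB * block_size bytes, which must fit in the SGL-described buffer. */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* NVMe generic command status "Data SGL Length Invalid" (0x0f = 15). */
#define NVME_SC_DATA_SGL_LENGTH_INVALID 0x0f

static bool read_cmd_fits_sgl(uint64_t nlb, uint32_t block_size, uint64_t sgl_len)
{
    if (nlb * block_size > sgl_len) {
        /* Mirrors the wording of the error line in the log above. */
        fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
                " > SGL length %" PRIu64 "\n", nlb, block_size, sgl_len);
        return false; /* command completes with sct=0, sc=0x0f */
    }
    return true;
}

int main(void)
{
    /* The failing case this unit test drives repeatedly:
     * 1 block of 512 bytes against a 1-byte SGL. */
    if (!read_cmd_fits_sgl(1, 512, 1)) {
        printf("Read completed with error (sct=0, sc=%d)\n",
               NVME_SC_DATA_SGL_LENGTH_INVALID);
    }
    return 0;
}

Compiled and run, the sketch reproduces both log lines for the exact case exercised here: 1 block of 512 bytes against a 1-byte SGL.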
[same read-length error continues, 14:13:15.401 through 14:13:15.407; duplicates elided] 00:14:48.191 
[2024-07-24 14:13:15.407607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.191 [2024-07-24 14:13:15.407656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.191 [2024-07-24 14:13:15.407704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.191 [2024-07-24 14:13:15.407750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.191 [2024-07-24 14:13:15.407825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.191 [2024-07-24 14:13:15.407875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.191 [2024-07-24 14:13:15.407920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.191 [2024-07-24 14:13:15.407966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.191 [2024-07-24 14:13:15.408016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.191 [2024-07-24 14:13:15.408064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.191 [2024-07-24 14:13:15.408129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.191 [2024-07-24 14:13:15.408176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.191 [2024-07-24 14:13:15.408223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.191 [2024-07-24 14:13:15.408279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.191 [2024-07-24 14:13:15.408341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.191 [2024-07-24 14:13:15.408513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.191 [2024-07-24 14:13:15.408566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.191 [2024-07-24 14:13:15.408633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.191 [2024-07-24 14:13:15.408680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.191 [2024-07-24 14:13:15.408731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.191 [2024-07-24 14:13:15.408797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.191 [2024-07-24 14:13:15.408868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.191 [2024-07-24 14:13:15.408922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.191 [2024-07-24 14:13:15.408982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.191 [2024-07-24 14:13:15.409033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.191 [2024-07-24 14:13:15.409095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:48.191 [2024-07-24 14:13:15.409143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.191 [2024-07-24 14:13:15.409205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.191 [2024-07-24 14:13:15.409251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.191 [2024-07-24 14:13:15.409300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.191 [2024-07-24 14:13:15.409346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.191 [2024-07-24 14:13:15.409401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.191 [2024-07-24 14:13:15.409463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.191 [2024-07-24 14:13:15.409511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.191 [2024-07-24 14:13:15.409558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.191 [2024-07-24 14:13:15.409604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.191 [2024-07-24 14:13:15.409651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.409699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.409758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.409849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.409897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.409952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.410001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.410049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.410129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.410176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.410244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.410449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.410507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.410557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.410604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.410654] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.410702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.410753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.410830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.410883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.410935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.410983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.411036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.411100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.411146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.411192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.411237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.411284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.411339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.411389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.411437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.411485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.411531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.411581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.411625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.411671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.411723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.411785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.411849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.411902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.411953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 
[2024-07-24 14:13:15.412005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.412212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.412266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.412341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.412395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.412444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.412495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.412542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.412591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.412637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.412686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.412734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.412808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.412860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.412913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.412961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.413012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.413060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.413132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.413181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.413230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.413286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.413336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.413384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.413433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.413479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.413534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.413582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.413630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.413690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.413735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.413782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.413873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.413938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.414144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.414219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.414277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.414331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.414382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.414430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.414474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.414524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.414579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.414640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.414699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.414749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.414829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.414878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.414926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.414980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.415026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.415076] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.415142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.415188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.415233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.415287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.192 [2024-07-24 14:13:15.415341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.415391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.415436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.415494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.415540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.415588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.415635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.415684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.415761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.415853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.415906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.416131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.416181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.416231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.416283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.416331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.416375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.416420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.416469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.416519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.416567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 
[2024-07-24 14:13:15.416614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.416664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.416710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.416758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.416840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.416893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.416941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.416992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.417042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.417114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.417160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.417209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.417257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.417305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.417361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.417409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.417462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.417511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.417559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.417613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.417847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.417902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.417955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.418001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.418052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.418115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.418173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.418224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.418282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.418327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.418376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.418430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.418478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.418526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.418574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.418627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.418673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.418726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.418803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.418855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.418902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.418950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.418997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.419047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.419110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.419164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.419224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.419275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.419322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.419371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.419420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.419465] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.419527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.419733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.419818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.419868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.419917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.419965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.420019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.420070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.420136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.420184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.420233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.420281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.420329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.420375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.420421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.420472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.420521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.420569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.420615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.420663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.420709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.420755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.420830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.420881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.193 [2024-07-24 14:13:15.420933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 
[2024-07-24 14:13:15.420988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.421038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.421103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.421154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.421212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.421258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.421302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.421377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.421552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.421607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.421668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.421724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.421787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.421850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.421906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.421952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.422001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.422051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.422115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.422165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.422211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.422258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.422305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.422347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.422392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.422435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.422480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.422528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.422574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.422622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.422666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.422712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.422758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.422832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.422903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.422953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.423005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.423069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.423136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.423204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.423385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.423467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.423524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.423582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.423630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.423681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.423734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.423784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.423863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.423914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.423962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.424013] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.424061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.424128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.424188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.424236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.424285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.424343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.424394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.424443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.424492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.424540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.424586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.424639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.424687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.424736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.424784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.424861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.424914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.424962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.425186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.425238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.425308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.425357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.425404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.425454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.425501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 
[2024-07-24 14:13:15.425547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.425596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.425644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.425701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.425755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.425844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.425897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.425944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.194 [2024-07-24 14:13:15.425990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.195 [2024-07-24 14:13:15.426038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.195 [2024-07-24 14:13:15.426083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.195 [2024-07-24 14:13:15.426155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.195 [2024-07-24 14:13:15.426204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.195 [2024-07-24 14:13:15.426254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.195 [2024-07-24 14:13:15.426302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.195 [2024-07-24 14:13:15.426351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.195 [2024-07-24 14:13:15.426400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.195 [2024-07-24 14:13:15.426448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.195 [2024-07-24 14:13:15.426497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.195 [2024-07-24 14:13:15.426551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.195 [2024-07-24 14:13:15.426599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.195 [2024-07-24 14:13:15.426648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.195 [2024-07-24 14:13:15.426697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.195 [2024-07-24 14:13:15.426744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.195 [2024-07-24 14:13:15.426825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.195 [2024-07-24 14:13:15.426876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:48.195 [2024-07-24 14:13:15.427101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:14:48.195 (last message repeated several hundred times between 14:13:15.427101 and 14:13:15.457616)
00:14:48.200 [2024-07-24 14:13:15.457662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:14:48.200 [2024-07-24 14:13:15.457706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.200 [2024-07-24 14:13:15.457755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.200 [2024-07-24 14:13:15.457829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.200 [2024-07-24 14:13:15.457878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.200 [2024-07-24 14:13:15.457942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.200 [2024-07-24 14:13:15.458001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.200 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:48.200 [2024-07-24 14:13:15.458192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.200 [2024-07-24 14:13:15.458256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.200 [2024-07-24 14:13:15.458305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.200 [2024-07-24 14:13:15.458352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.200 [2024-07-24 14:13:15.458403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.200 [2024-07-24 14:13:15.458448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.458495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.458543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.458589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.458636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.458682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.458733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.458808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.458858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.458910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.458957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.459007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.459056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.459118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:14:48.201 [2024-07-24 14:13:15.459163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.459216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.459270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.459318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.459365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.459412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.459461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.459509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.459557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.459604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.459652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.459729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.459807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.459857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.460071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.460146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.460208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.460261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.460308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.460353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.460400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.460450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.460498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.460548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.460596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.460642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.460687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.460735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.460808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.460859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.460910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.460958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.461012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.461059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.461136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.461197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.461244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.461291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.461340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.461388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.461439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.461487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.461531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.461582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.461799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.461858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.461923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.461979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.462027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.462092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.462140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.462189] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.462235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.462280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.462327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.462376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.462424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.462472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.462520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.462569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.462615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.462667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.462711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.462753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.462827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.462879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.462930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.462987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.463036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.463112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.463160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.463212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.463259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.463311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.463360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.463415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.463483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 
[2024-07-24 14:13:15.463687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.463740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.463813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.463862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.463908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.201 [2024-07-24 14:13:15.463956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.464007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.464055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.464126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.464175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.464222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.464270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.464319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.464370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.464418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.464463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.464511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.464558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.464602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.464647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.464696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.464743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.464817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.464883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.464934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.464983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.465034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.465097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.465143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.465193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.465241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.465303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.465477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.465529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.465590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.465639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.465694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.465744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.465817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.465867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.465920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.465970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.466019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.466086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.466149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.466195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.466240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.466286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.466331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.466377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.466423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.466471] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.466520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.466566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.466612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.466657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.466708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.466753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.466825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.466877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.466925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.466989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.467040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.467123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.467323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.467373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.467416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.467460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.467509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.467555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.467606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.467653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.467699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.467744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.467819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.467875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.467923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 
[2024-07-24 14:13:15.467971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.468019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.468089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.468153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.468200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.468246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.468295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.468339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.468391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.468441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.468495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.468540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.468588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.468632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.468676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.468720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.468768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.468843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.468908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.469127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.469187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.469233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.469281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.469328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.469375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.202 [2024-07-24 14:13:15.469419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.469464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.469512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.469558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.469612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.469659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.469707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.469754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.469823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.469869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.469913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.469961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.470009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.470057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.470119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.470166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.470213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.470257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.470302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.470351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.470399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.470446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.470493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.470546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.470591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.470806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.470858] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.470926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.470975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.471020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.471066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.471137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.471183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.471231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.471278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.471327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.471374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.471423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.471470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.471521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.471566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.471614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.471660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.471706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.471752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.471825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.471877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.471927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.471974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.472027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.472079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.472139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 
[2024-07-24 14:13:15.472192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.472237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.472284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.472330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.472403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.472457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.472502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.472687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.472739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.472815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.472864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.472913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.472959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.473006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.473059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.473127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.473174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.473220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.473267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.473311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.473353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.473396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.473441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.473487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.473532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.473577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.473621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.473666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.473710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.473756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.473827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.203 [2024-07-24 14:13:15.473876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.473923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.473977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.474035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.474098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.474153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.474346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.474409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.474456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.474501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.474556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.474601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.474649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.474696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.474744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.474814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.474867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.474916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.474963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.475015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.475079] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.475154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.475207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.475253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.475305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.475353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.475426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.475476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.475524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.475569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.475613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.475657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.475700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.475747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.475817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.475877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.475927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.475977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.476199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.476252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.476313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.476363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.476407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.476453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.476501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 [2024-07-24 14:13:15.476547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.204 
00:14:48.204 [2024-07-24 14:13:15.476592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same *ERROR* line repeated verbatim several hundred times, timestamps 14:13:15.476592 through 14:13:15.506718; duplicates omitted ...]
00:14:48.210 [2024-07-24 14:13:15.506718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[2024-07-24 14:13:15.506784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.210 [2024-07-24 14:13:15.506843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.506895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.506945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.506994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.507048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.507099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.507154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.507203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.507256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.507304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.507354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.507402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.507448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.507495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.507547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.507614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.507664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.507731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.507781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.507842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.507889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.507936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.507986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.508038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.508087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.508138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.508197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.508246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.508471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.508522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.508576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.508626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.508675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.508720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.508769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.508827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.508879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.508927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.508975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.509030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.509080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.509128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.509180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.509228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.509283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.509333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.509381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.509430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.509482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.509531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.509579] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.509630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.509681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.509730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.509784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.509843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.509896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.509941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.509990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.510040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.510107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.510289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.510353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.510403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.510453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.510501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.510547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.510593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.510641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.510705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.510767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.510828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.490 [2024-07-24 14:13:15.510889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.510944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.510994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.511040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 
[2024-07-24 14:13:15.511087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.511133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.511182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.511231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.511293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.511339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.511385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.511435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.511482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.511528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.511596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.511642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.511690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.511742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.511796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.512006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.512059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.512113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.512191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.512255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.512304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.512352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.512402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.512450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.512499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.512548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.512599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.512647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.512700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.512749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.512837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.512888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.512935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.512984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.513031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.513079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.513130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.513177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.513228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.513283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.513347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.513415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.513479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.513529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.513572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.513620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.513669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.513760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.513841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.513889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.514107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.514159] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.514207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.514255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.514299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.514345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.514397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.514442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.514485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.514538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.514584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.514636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.514679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.514723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.514772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.514844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.514893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.514944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.514990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.515039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.515085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.515148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.515194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.515248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.515297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.515345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.515389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 
[2024-07-24 14:13:15.515433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.515484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:48.491 [2024-07-24 14:13:15.515699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.515763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.515838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.515889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.515937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.515986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.516035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.516085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.516147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.516194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.516240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.491 [2024-07-24 14:13:15.516288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.516332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.516379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.516433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.516478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.516525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.516571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.516618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.516666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.516711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.516755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.516824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 
14:13:15.516871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.516922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.516970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.517017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.517062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.517132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.517180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.517228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.517273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.517319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.517381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.517578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.517633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.517695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.517743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.517817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.517873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.517924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.517982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.518034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.518084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.518149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.518198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.518245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.518308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.518369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:14:48.492 [2024-07-24 14:13:15.518422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.518470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.518521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.518569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.518619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.518668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.518715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 true 00:14:48.492 [2024-07-24 14:13:15.518769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.518843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.518896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.518951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.519002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.519052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.519120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.519183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.519232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.519296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.519485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.519537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.519603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.519657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.519709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.519757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.519832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.519885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.519934] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.519986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.520035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.520091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.520157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.520209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.520261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.520310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.520362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.520411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.520459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.520509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.520558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.520611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.520658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.520704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.520753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.520828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.520882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.520934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.520984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.521053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.521121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.521186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.521362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.521429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 
[2024-07-24 14:13:15.521483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.521533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.521588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.521636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.521689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.521735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.521810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.521862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.492 [2024-07-24 14:13:15.521912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.521961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.522011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.522061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.522132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.522187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.522238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.522286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.522335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.522384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.522431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.522478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.522527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.522577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.522624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.522673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.522723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.522785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.522847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.522896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.523140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.523198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.523247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.523311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.523362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.523411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.523463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.523510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.523560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.523609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.523657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.523706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.523755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.523829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.523887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.523937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.523987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.524043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.524092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.524159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.524212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.524261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.524307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.524351] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.524398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.524449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.524499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.524549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.524598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.524647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.524693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.524740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.524849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.524902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.524952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.525149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.525200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.525250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.525304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.525355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.525404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.525454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.525499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.525550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.525600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.525649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.525698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.525750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 [2024-07-24 14:13:15.525811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.493 
[2024-07-24 14:13:15.525863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... same *ERROR* line repeated verbatim, timestamps 14:13:15.525915 through 14:13:15.535738; duplicates elided ...]
00:14:48.495 14:13:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 56458
[... same *ERROR* line repeated 4 more times, timestamps 14:13:15.535815 through 14:13:15.535977; duplicates elided ...]
00:14:48.495 14:13:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
[... same *ERROR* line repeated verbatim, timestamps 14:13:15.536207 through 14:13:15.556484; duplicates elided ...]
00:14:48.499 [2024-07-24 14:13:15.556528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.556576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.556623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.556672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.556721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.556784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.556848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.556899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.556951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.557001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.557053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.557117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.557184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.557232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.557277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.557327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.557373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.557421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.557467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.557516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.557568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.557618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.557666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.557712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.557796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.557976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.558043] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.558108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.558160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.558207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.558256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.558301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.558351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.558395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.558441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.558488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.558532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.558583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.558630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.558678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.499 [2024-07-24 14:13:15.558724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.558785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.558843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.558891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.558935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.558991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.559040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.559103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.559150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.559193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.559240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.559286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 
[2024-07-24 14:13:15.559338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.559389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.559435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.559631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.559689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.559751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.559826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.559875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.559925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.559974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.560020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.560065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.560128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.560179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.560228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.560279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.560324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.560372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.560418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.560469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.560515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.560560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.560610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.560660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.560709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.560755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.560838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.560889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.560941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.560991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.561042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.561109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.561172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.561224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.561271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.561317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.561504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.561553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.561613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.561660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.561710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.561754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.561824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.561872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.561921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.561970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.562024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.562087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.562134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.562180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.562227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.562282] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.562328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.562373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.562417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.562458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.562503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.562552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.562598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.562646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.562691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.562737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.562809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.562860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.562910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.562957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.563011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.563235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.563300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.563347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.563397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.563444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.563492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.563536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.563580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.563625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.563670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 
[2024-07-24 14:13:15.563716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.563764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.563836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.563886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.563932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.563981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.564030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.564082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.564143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.500 [2024-07-24 14:13:15.564190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.564235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.564282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.564329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.564375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.564422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.564470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.564514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.564559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.564607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.564655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.564699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.564747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.564830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.564902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.565105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.565155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.565215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.565267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.565316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.565363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.565412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.565459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.565506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.565557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.565603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.565652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.565695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.565742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.565817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.565872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.565921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.565969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.566019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.566067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.566119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.566181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.566229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.566275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.566323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.566369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.566417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.566461] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.566508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.566560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.566610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.566852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.566907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.566956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.567005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.567051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.567112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.567163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.567208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.567256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.567300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.567345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.567388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.567437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.567484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.567528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.567572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.567619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.567663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.567709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.567752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.567834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.567884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 
[2024-07-24 14:13:15.567935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.567986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.568034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.568106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.568168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.568219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.568265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.568310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.568358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.568405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.568475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.568646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.568706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.568756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.568827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.568878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.568929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.568976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.569026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.569074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.569157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.569205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.569253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.569302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.569351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.569398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.569442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.569488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.569531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.501 [2024-07-24 14:13:15.569578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.569624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.569672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.569719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.569767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.569839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.569889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.569935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.569985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.570032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.570095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.570141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.570344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.570398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.570452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.570512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.570562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.570609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.570652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.570698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.570747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.570821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.570876] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.570922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.570969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.571018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.571063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.571127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.571171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.571223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.571272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.571322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.571371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.571420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.571468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.571512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.571568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.571614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.571662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.571708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.571751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.571822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.571872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.571926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.571990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.572041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:48.502 [2024-07-24 14:13:15.572102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.572287] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.572336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.572385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.572429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.572477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.572522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.572570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.572620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.572666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.572714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.572762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.572848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.572900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.572947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.572994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.573044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.573109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.573174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.573224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.573270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.573318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.573362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.573410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.573458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.573504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.573550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 
[2024-07-24 14:13:15.573599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.573644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.573688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.573919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.573989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.574044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.574096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.574158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.574204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.574247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.574293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.574340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.574386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.574432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.574479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.574530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.574575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.574623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.574671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.574720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.574781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.574842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.574888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.574933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.502 [2024-07-24 14:13:15.574982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.503 [2024-07-24 14:13:15.575029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:14:48.503 [2024-07-24 14:13:15.575146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical ctrlr_bdev.c:309 *ERROR* line repeated several hundred times, timestamps 14:13:15.575 through 14:13:15.605; verbatim duplicates elided ...]
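[Editor's note: this repeated *ERROR* line is the nvmf ctrlr_bdev unit test deliberately exercising the read-command length check: the command asks for NLB = 1 block of 512 bytes, but the host-supplied SGL buffer is only 1 byte, so the request is rejected. A minimal standalone sketch of that kind of check is below; it is not the SPDK source, and the names num_blocks, block_size, and sgl_length are illustrative assumptions.]

    /* Sketch (not SPDK source): reject a read whose transfer length
     * (num_blocks * block_size) exceeds the SGL buffer the host provided.
     * This is the condition behind "Read NLB 1 * block size 512 > SGL length 1". */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static bool
    read_cmd_length_ok(uint64_t num_blocks, uint32_t block_size, uint32_t sgl_length)
    {
        if (num_blocks * block_size > sgl_length) {
            fprintf(stderr, "Read NLB %llu * block size %u > SGL length %u\n",
                    (unsigned long long)num_blocks, block_size, sgl_length);
            /* Real code would additionally complete the request with an
             * NVMe "SGL length invalid" status instead of just logging. */
            return false;
        }
        return true;
    }

    int
    main(void)
    {
        /* Mirrors the logged case: 1 block of 512 bytes vs. a 1-byte SGL. */
        return read_cmd_length_ok(1, 512, 1) ? 0 : 1;
    }

[End of editor's note; the elided error run ends with the line below.]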
00:14:48.508 [2024-07-24 14:13:15.605658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:14:48.508 [2024-07-24 14:13:15.605703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.508 [2024-07-24 14:13:15.605786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.508 [2024-07-24 14:13:15.605851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.508 [2024-07-24 14:13:15.605898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.508 [2024-07-24 14:13:15.605949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.508 [2024-07-24 14:13:15.605996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.508 [2024-07-24 14:13:15.606051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.508 [2024-07-24 14:13:15.606113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.508 [2024-07-24 14:13:15.606161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.508 [2024-07-24 14:13:15.606204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.508 [2024-07-24 14:13:15.606252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.508 [2024-07-24 14:13:15.606296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.508 [2024-07-24 14:13:15.606341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.508 [2024-07-24 14:13:15.606389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.508 [2024-07-24 14:13:15.606452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.508 [2024-07-24 14:13:15.606500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.606546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.606729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.606801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.606851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.606898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.606947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.607004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.607053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.607114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.607160] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.607207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.607260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.607308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.607355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.607399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.607451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.607500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.607545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.607588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.607634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.607679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.607727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.607786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.607845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.607899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.607946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.607995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.608042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.608104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.608152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.608201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.608422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.608473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.608517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.608561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 
[2024-07-24 14:13:15.608608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.608656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.608701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.608750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.608834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.608898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.608947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.608996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.609043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.609092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.609158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.609223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.609270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.609325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.609372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.609420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.609466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.609514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.609561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.609604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.609651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.609695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.609739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.609808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.609860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.609909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.609958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.610024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.610074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.610138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.610328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.610380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.610428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.610473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.610519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.610566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.610610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.610655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.610701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.610747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.610822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.610871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.610915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.610962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.611011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.611061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.611130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.611177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.611226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.611283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.611327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.611375] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.611421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.611468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.611514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.611559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.611619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.611668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.611714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.611760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.611982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.509 [2024-07-24 14:13:15.612039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.612083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.612154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.612206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.612253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.612299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.612345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.612392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.612440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.612490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.612535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.612583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.612628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.612677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.612722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.612765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 
[2024-07-24 14:13:15.612837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.612891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.612939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.612988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.613037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.613085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.613147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.613190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.613233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.613277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.613325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.613373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.613418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.613476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.613523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.613585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.613754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.613855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.613909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.613958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.614006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.614051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.614111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.614161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.614210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.614256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.614305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.614348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.614393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.614438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.614488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.614532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.614583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.614628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.614680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.614730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.614798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.614851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.614899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.614949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.614997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.615041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.615103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.615152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.615198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.615252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.615442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.615496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.615539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.615590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.615650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.615697] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.615743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.615815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.615869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.615916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.615966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.616012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.616059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.616122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.616169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.616216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.616262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.616309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.616354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.616400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.616445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.616499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.616546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.616592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.616638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.616682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.616728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.616797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.616853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.616902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.616953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 
[2024-07-24 14:13:15.617018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.617066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.617160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.617333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.510 [2024-07-24 14:13:15.617398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.617445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.617492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.617535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.617581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.617625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.617671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.617718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.617761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.617828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.617877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.617923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.617971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.618019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.618065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.618130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.618176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.618223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.618274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.618320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.618368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.618413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.618459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.618504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.618551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.618599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.618645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.618691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.618737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.618836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.618889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.618939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.619150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.619202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.619249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.619294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.619341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.619386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.619431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.619476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.619521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.619567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.619614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.619657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.619702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.619750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.619819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.619872] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.619925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.619971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.620019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.620084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.620133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.620175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.620220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.620268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.620312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.620359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.620405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.620451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.620497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.620553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.620614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.620808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.620862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.620926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.620987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.621036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.621108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.621173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.621219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.621263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.621307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 
[2024-07-24 14:13:15.621352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.621394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.621440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.621485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.621531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.621575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.621620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.621666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.621710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.621758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.621831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.621883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.621933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.621980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.622029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.622076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.622139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.622187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.622233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.622300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.622366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.622418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.622459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.622638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.511 [2024-07-24 14:13:15.622686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.512 [2024-07-24 14:13:15.622732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:48.512 [2024-07-24 14:13:15.622805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.512 [2024-07-24 14:13:15.622855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.512 [2024-07-24 14:13:15.622904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.512 [2024-07-24 14:13:15.622952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.512 [2024-07-24 14:13:15.623001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.512 [2024-07-24 14:13:15.623049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.512 [2024-07-24 14:13:15.623111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.512 [2024-07-24 14:13:15.623157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.512 [2024-07-24 14:13:15.623203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.512 [2024-07-24 14:13:15.623248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.512 [2024-07-24 14:13:15.623295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.512 [2024-07-24 14:13:15.623342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.512 [2024-07-24 14:13:15.623388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.512 [2024-07-24 14:13:15.623439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.512 [2024-07-24 14:13:15.623486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.512 [2024-07-24 14:13:15.623535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.512 [2024-07-24 14:13:15.623579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.512 [2024-07-24 14:13:15.623623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.512 [2024-07-24 14:13:15.623670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.512 [2024-07-24 14:13:15.623716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.512 [2024-07-24 14:13:15.623763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.512 [2024-07-24 14:13:15.623838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.512 [2024-07-24 14:13:15.623889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.512 [2024-07-24 14:13:15.623934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.512 [2024-07-24 14:13:15.623985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.512 [2024-07-24 14:13:15.624031] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.512 
[... same *ERROR* line from ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd repeated for every read issued between 14:13:15.624101 and 14:13:15.653997; several hundred verbatim duplicates elided ...] 
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:48.512 
[2024-07-24 14:13:15.653997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 
[2024-07-24 14:13:15.654042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.654088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.654153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.654199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.654244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.654289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.654345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.654406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.654579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.654629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.654694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.654742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.654813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.654862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.654909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.654955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.655001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.655050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.655113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.655156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.655201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.655249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.655294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.655345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.655401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.655447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.655495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.655540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.655590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.655635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.655683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.655741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.655811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.655860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.655906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.655959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.656006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.656090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.656154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.656204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.656250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.656441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.656495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.656544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.656593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.656640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.656686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.656731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.656800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.656855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.656903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.656952] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.657002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.657051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.657117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.657179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.657224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.657278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.657322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.657366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.657415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.657458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.657502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.657543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.657592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.657638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.657686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.657731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.657802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.657852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.657910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.657975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.658159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.658209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.658275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.658321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.658372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 
[2024-07-24 14:13:15.658422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.658468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.658513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.658560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.658604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.658646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.658694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.658746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.658817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.658871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.518 [2024-07-24 14:13:15.658918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.658968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.659013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.659065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.659127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.659172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.659224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.659270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.659319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.659366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.659412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.659456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.659504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.659550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.659609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.659657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.659725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.659928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.660003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.660054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.660118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.660171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.660219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.660266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.660311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.660356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.660407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.660452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.660497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.660544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.660590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.660635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.660690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.660735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.660805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.660856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.660907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.660956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.661004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.661054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.661120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.661188] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.661233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.661279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.661326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.661371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.661418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.661495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.661543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.661590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.661757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.661845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.661896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.661946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.661998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.662046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.662110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.662156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.662201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.662246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.662294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.662344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.662388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.662435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.662481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.662529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.662573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 
[2024-07-24 14:13:15.662621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.662665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.662712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.662758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.662831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.662881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.662929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.662977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.663027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.663075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.663137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.663196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.663242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.663304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.663476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.663538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.663587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.663638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.663684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.663732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.663811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.663861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.663909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.663956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.664003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.664051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.664098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.664161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.664213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.519 [2024-07-24 14:13:15.664259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.664303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.664350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.664396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.664442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.664485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.664528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.664572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.664621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.664666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.664713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.664758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.664829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.664881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.664931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.665016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.665069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.665136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.665333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.665383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.665433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.665478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.665527] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.665575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.665625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.665675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.665721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.665785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.665844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.665893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.665944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.665991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.666044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.666108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.666157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.666201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.666248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.666297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.666342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.666389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.666435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.666485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.666532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.666577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.666631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.666678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.666723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.666767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 
[2024-07-24 14:13:15.666855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.667029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.667094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.667154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.667204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.667254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.667301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.667350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.667396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.667448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.667496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.667544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.667589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.667641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.667689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.667737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.667808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.667859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.667904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.667950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.668001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.668049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.668112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.668157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.668202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.668253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.668298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.668347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.668394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.668441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.668507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.668572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.668621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.668667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.668888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.668943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.668994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.669049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.669095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.669154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.669199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.669253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.669298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.669346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.669391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.669440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.669485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.669535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.669580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.520 [2024-07-24 14:13:15.669626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.669670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.669716] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.669765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.669844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.669892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.669944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.669992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.670040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.670106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.670150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.670199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.670241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.670283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.670329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.670393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.670560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.670612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.670671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.670724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.670785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.670842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.670895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.670943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.670993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.671047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.671110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.671162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 
[2024-07-24 14:13:15.671208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.671262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.671310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.671357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.671399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.671447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.671492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.671539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.671583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.671631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.671676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.671725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.671771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.671844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.671896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.671945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.671994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.672053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.672113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.672181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.672386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.672440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.672489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.672536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.672579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.672623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:48.521 [2024-07-24 14:13:15.672671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.523 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:48.525
[2024-07-24 14:13:15.689161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.525 [2024-07-24 14:13:15.689201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.525 [2024-07-24 14:13:15.689239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.525 [2024-07-24 14:13:15.689277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.525 [2024-07-24 14:13:15.689315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.525 [2024-07-24 14:13:15.689381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:48.525 14:13:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:48.782 14:13:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:14:48.782 14:13:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:14:49.040 true 00:14:49.040 14:13:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 56458 00:14:49.040 14:13:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:49.297 14:13:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:49.555 14:13:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:14:49.555 14:13:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:14:49.812 true 00:14:49.812 14:13:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 56458 00:14:49.812 14:13:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:50.069 14:13:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:50.327 14:13:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:14:50.327 14:13:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:14:50.583 true 00:14:50.583 Initializing NVMe Controllers 00:14:50.583 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:14:50.583 Controller IO queue size 128, less than required. 00:14:50.583 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:50.583 Controller IO queue size 128, less than required. 00:14:50.583 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:14:50.583 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:50.583 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:50.583 Initialization complete. Launching workers. 00:14:50.583 ======================================================== 00:14:50.583 Latency(us) 00:14:50.583 Device Information : IOPS MiB/s Average min max 00:14:50.583 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7009.72 3.42 14757.02 1228.78 1171513.62 00:14:50.583 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 24383.76 11.91 5249.25 2700.82 388796.16 00:14:50.583 ======================================================== 00:14:50.583 Total : 31393.47 15.33 7372.20 1228.78 1171513.62 00:14:50.583 00:14:50.583 14:13:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 56458 00:14:50.583 14:13:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:50.839 14:13:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:50.839 14:13:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:14:50.839 14:13:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:14:51.098 true 00:14:51.098 14:13:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 56458 00:14:51.098 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (56458) - No such process 00:14:51.098 14:13:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 56458 00:14:51.098 14:13:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.356 14:13:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:51.613 14:13:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:14:51.613 14:13:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:14:51.613 14:13:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:14:51.613 14:13:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:51.613 14:13:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:14:51.870 null0 00:14:51.870 14:13:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:51.870 14:13:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:51.870 14:13:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:14:52.128 null1 00:14:52.128 14:13:19 nvmf_rdma.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:52.128 14:13:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:52.128 14:13:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:14:52.385 null2 00:14:52.385 14:13:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:52.385 14:13:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:52.385 14:13:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:14:52.643 null3 00:14:52.643 14:13:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:52.643 14:13:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:52.643 14:13:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:14:52.900 null4 00:14:52.900 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:52.900 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:52.900 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:14:53.158 null5 00:14:53.158 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:53.158 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:53.158 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:14:53.415 null6 00:14:53.415 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:53.415 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:53.415 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:14:53.673 null7 00:14:53.673 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:53.673 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:53.673 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:14:53.673 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:53.673 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
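The trace above creates the eight backing devices (null0 through null7) that the hotplug workers will flip in and out of the subsystem, then begins launching those workers. A minimal sketch of the creation loop, reconstructed from the @58-@60 trace lines rather than copied from ns_hotplug_stress.sh (the rpc.py arguments, name, size in MB, block size, are exactly those visible in the trace):

    nthreads=8
    pids=()
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # one 100 MB null bdev per worker thread, 4096-byte blocks;
    # rpc.py echoes the new bdev name (null0, null1, ...) as seen in the log
    for ((i = 0; i < nthreads; i++)); do
        "$rpc" bdev_null_create "null$i" 100 4096
    done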
00:14:53.673 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:14:53.673 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:53.673 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:14:53.673 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:53.673 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:53.673 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:53.673 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:53.673 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:53.673 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
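Each add_remove invocation above is backgrounded as a worker that repeatedly attaches its null bdev as a namespace and hot-removes it again, which is the stress being tested. A sketch of what one worker does, inferred from the @63, @14, @16 and @17 trace lines (local nsid/bdev, a ten-iteration loop, nvmf_subsystem_add_ns -n), under the assumption that each add is paired with a remove; this is a reconstruction, not the verbatim function body:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            # attach the bdev as namespace $nsid on cnode1, then hot-remove it
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }
    add_remove 1 null0 &   # one worker per namespace/bdev pair
    pids+=($!)             # reaped later by the @66 line: wait "${pids[@]}"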
00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 60637 60638 60640 60642 60644 60646 60648 60650 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:53.674 14:13:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:53.932 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:53.932 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:53.932 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:53.932 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:53.932 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:53.932 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:53.932 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:53.932 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:54.190 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:54.190 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:54.190 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:54.190 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:54.190 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:54.190 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:54.190 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:54.190 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:54.190 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:54.190 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:54.190 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:54.190 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:54.190 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:54.190 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:54.190 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:54.190 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:54.190 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:54.190 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:54.190 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:54.190 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:54.190 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:54.190 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:54.190 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:54.190 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 
null2 00:14:54.448 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:54.448 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:54.448 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:54.448 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:54.448 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:54.448 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:54.448 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:54.448 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:54.706 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:54.706 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:54.706 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:54.706 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:54.706 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:54.706 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:54.706 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:54.706 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:54.706 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:54.706 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:54.706 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:54.706 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:54.706 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:54.706 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:54.706 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:54.706 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:54.706 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:54.706 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:54.706 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:54.706 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:54.706 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:54.706 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:54.706 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:54.706 14:13:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:54.964 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:54.964 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:54.964 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:54.964 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:54.964 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:54.964 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:54.964 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:54.964 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:55.222 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.222 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.222 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:55.222 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.222 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.222 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:55.222 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.222 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.222 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:55.222 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.222 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.222 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:55.222 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.222 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.222 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:55.222 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.222 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.222 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:55.222 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.222 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.222 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:55.222 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.222 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.222 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:55.479 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:55.479 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:55.479 14:13:22 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:55.479 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:55.479 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:55.479 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:55.479 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:55.479 14:13:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:55.735 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.735 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.735 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:55.735 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.735 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.735 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:55.735 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.735 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.735 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:55.735 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.735 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.735 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:55.735 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.735 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.735 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:55.735 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.735 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:14:55.735 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:55.735 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.735 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.735 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:55.735 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.735 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.735 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:55.992 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:55.992 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:55.992 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:55.992 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:55.992 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:55.992 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:55.992 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:56.249 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:56.249 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.249 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.249 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:56.249 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.249 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.249 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:56.506 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.506 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.506 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.506 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.506 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:56.506 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:56.506 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.506 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.506 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:56.506 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.506 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.506 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:56.506 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.506 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.506 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:56.506 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.506 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.506 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:56.763 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:56.763 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:56.763 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:56.763 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:56.763 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:56.763 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:56.763 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:56.763 14:13:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:57.020 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.020 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.020 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:57.020 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.020 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.020 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:57.020 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.020 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.020 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:57.020 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.020 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.020 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:57.020 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.020 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.020 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.020 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.020 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:57.020 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:57.020 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.020 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.020 14:13:24 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:57.020 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.020 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.020 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:57.277 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:57.277 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:57.277 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:57.277 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:57.278 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:57.278 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:57.278 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:57.278 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:57.535 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.535 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.535 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:57.535 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.535 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.535 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:57.535 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.535 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.535 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:14:57.535 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.535 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.535 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.535 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:57.535 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.535 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:57.535 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.535 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.535 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:57.535 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.535 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.535 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.535 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.535 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:57.535 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:57.792 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:57.792 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:57.792 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:57.792 14:13:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:57.792 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:57.792 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:57.792 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:57.792 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:58.050 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.050 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.050 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:58.050 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.050 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.050 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:58.050 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.050 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.050 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:58.050 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.050 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.050 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:58.050 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.050 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.050 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:58.050 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.050 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.050 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:58.050 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.050 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.050 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:58.051 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.051 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.051 14:13:25 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:58.308 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:58.308 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:58.308 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:58.308 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:58.308 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:58.308 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:58.308 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:58.308 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:58.565 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.565 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.565 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:58.565 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.565 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.565 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:58.565 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.565 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.565 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:58.565 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.565 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.565 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:14:58.565 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.565 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.565 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:58.565 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.565 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.565 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:58.565 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.565 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.565 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:58.565 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.565 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.565 14:13:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:58.823 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:58.823 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:58.823 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:58.823 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:58.823 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:58.823 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:58.823 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:58.823 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:59.081 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.081 14:13:26 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.081 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.081 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.081 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.081 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.081 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.081 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.081 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.081 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.081 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.081 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.081 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.081 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.081 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.081 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.081 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:59.081 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:59.081 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:59.081 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:14:59.081 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:59.081 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:59.081 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:14:59.081 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:59.081 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:59.081 rmmod nvme_rdma 00:14:59.081 rmmod nvme_fabrics 00:14:59.081 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:59.082 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:14:59.082 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:14:59.082 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 56043 ']' 00:14:59.082 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 56043 00:14:59.082 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 56043 ']' 00:14:59.082 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 56043 00:14:59.082 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:14:59.082 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:59.082 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps 
--no-headers -o comm= 56043
00:14:59.082 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:14:59.082 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:14:59.082 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 56043'
killing process with pid 56043
00:14:59.082 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 56043
00:14:59.082 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 56043
00:14:59.648 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:14:59.648 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]]
00:14:59.648
00:14:59.648 real 0m46.685s
00:14:59.648 user 3m41.439s
00:14:59.648 sys 0m12.191s
00:14:59.648 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable
00:14:59.648 14:13:26 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:14:59.648 ************************************
00:14:59.648 END TEST nvmf_ns_hotplug_stress
00:14:59.648 ************************************
00:14:59.648 14:13:26 nvmf_rdma -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma
00:14:59.648 14:13:26 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:14:59.648 14:13:26 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable
00:14:59.648 14:13:26 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:14:59.648 ************************************
00:14:59.648 START TEST nvmf_connect_stress
00:14:59.648 ************************************
00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma
00:14:59.648 * Looking for test storage...
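
The nvmf_ns_hotplug_stress run that just ended reduces to one pattern: eight background workers, one per null bdev, each attaching and detaching its own namespace on nqn.2016-06.io.spdk:cnode1 ten times through rpc.py. A minimal bash sketch of that pattern, reconstructed from the xtrace markers above (ns_hotplug_stress.sh@14-18 for the worker, @62-66 for the fan-out); $rootdir standing in for the Jenkins workspace checkout and nthreads=8 (inferred from the eight waited pids) are assumptions, and the real script also arms a cleanup trap:

    add_remove() {
        # One worker: attach and detach a single namespace ten times
        # (ns_hotplug_stress.sh@14-18 in the trace).
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rootdir/scripts/rpc.py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rootdir/scripts/rpc.py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    # Fan out one worker per bdev, mapping nsid 1-8 onto null0-null7, and
    # collect their pids (@62-64); each worker runs in its own subshell.
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    # Block until every worker has finished its ten iterations (@66).
    wait "${pids[@]}"

Because every worker owns a distinct nsid, the add_ns/remove_ns RPCs from different workers interleave freely in the trace above without colliding.
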
00:14:59.648 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:59.648 14:13:26 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:59.649 14:13:26 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:59.649 14:13:26 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:59.649 14:13:26 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.649 14:13:26 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:14:59.649 14:13:26 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.649 14:13:26 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:59.649 14:13:26 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:59.649 14:13:26 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:59.649 14:13:26 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.177 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:02.177 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:15:02.177 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:02.177 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:02.177 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:02.177 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:02.177 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:02.177 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:15:02.177 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:02.177 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:15:02.177 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:15:02.177 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:15:02.177 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:15:02.177 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:15:02.177 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:15:02.177 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:02.177 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:02.178 14:13:29 
nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:15:02.178 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:15:02.178 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:15:02.178 Found net devices under 0000:81:00.0: mlx_0_0 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@383 -- # 
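The pci_net_devs glob seen above is the entire PCI-to-netdev mapping: sysfs lists each function's interfaces under its device directory. As a standalone helper (the name pci_to_netdev is ours, not the script's):

  pci_to_netdev() {
    local pci=$1 devs
    devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:81:00.0/net/mlx_0_0
    [[ -e ${devs[0]} ]] || return 1            # glob matched nothing
    printf '%s\n' "${devs[@]##*/}"             # strip the sysfs path, keep the name
  }
  pci_to_netdev 0000:81:00.0   # -> mlx_0_0 on this rig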
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:15:02.178 Found net devices under 0000:81:00.1: mlx_0_1 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@58 -- # uname 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # 
continue 2 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:02.178 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:02.179 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:02.179 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:15:02.179 altname enp129s0f0np0 00:15:02.179 inet 192.168.100.8/24 scope global mlx_0_0 00:15:02.179 valid_lft forever preferred_lft forever 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:02.179 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:02.179 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:15:02.179 altname enp129s0f1np1 00:15:02.179 inet 192.168.100.9/24 scope global mlx_0_1 00:15:02.179 valid_lft forever preferred_lft forever 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:02.179 
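The awk/cut pipeline repeated above is the whole address lookup, and it is worth seeing in one piece. A self-contained equivalent (function name matches the one traced from nvmf/common.sh):

  get_ip_address() {
    local interface=$1
    # `ip -o -4 addr show` prints one line per address, e.g.
    # "12: mlx_0_0    inet 192.168.100.8/24 scope global mlx_0_0 ..."
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig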
14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:02.179 192.168.100.9' 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:02.179 192.168.100.9' 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- 
nvmf/common.sh@457 -- # head -n 1 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:02.179 192.168.100.9' 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # tail -n +2 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # head -n 1 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:02.179 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:02.180 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:02.180 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:02.180 14:13:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:02.180 14:13:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.180 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=63519 00:15:02.180 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:02.180 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 63519 00:15:02.180 14:13:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 63519 ']' 00:15:02.180 14:13:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.180 14:13:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:02.180 14:13:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.180 14:13:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:02.180 14:13:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.180 [2024-07-24 14:13:29.429471] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:15:02.180 [2024-07-24 14:13:29.429555] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.180 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.180 [2024-07-24 14:13:29.500594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:02.438 [2024-07-24 14:13:29.595985] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:02.438 [2024-07-24 14:13:29.596062] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
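Splitting RDMA_IP_LIST into the first and second target, as traced at common.sh@457-458 near the top of this block, is plain head/tail over a newline-separated string. Reduced to three lines with this run's values:

  RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9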
00:15:02.438 [2024-07-24 14:13:29.596079] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:02.438 [2024-07-24 14:13:29.596093] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:02.438 [2024-07-24 14:13:29.596105] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:02.438 [2024-07-24 14:13:29.596206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:02.438 [2024-07-24 14:13:29.596303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:02.438 [2024-07-24 14:13:29.596305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.438 14:13:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:02.438 14:13:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:15:02.438 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:02.438 14:13:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:02.438 14:13:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.438 14:13:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.438 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:02.438 14:13:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.439 14:13:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.439 [2024-07-24 14:13:29.762988] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x974200/0x9786b0) succeed. 00:15:02.439 [2024-07-24 14:13:29.773528] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x975750/0x9b9d40) succeed. 
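Once waitforlisten sees /var/tmp/spdk.sock, everything the target needs is configured over RPC: the transport call just traced, plus the subsystem, listener, and null-bdev calls that follow in the next lines. Issued by hand, with $SPDK_ROOT standing in for the jenkins checkout path:

  RPC="$SPDK_ROOT/scripts/rpc.py"
  "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &   # cores 1-3, all trace groups
  # wait for /var/tmp/spdk.sock before issuing RPCs (waitforlisten does this in the suite)
  $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $RPC bdev_null_create NULL1 1000 512   # 1000 MiB null bdev, 512-byte blocks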
00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.697 [2024-07-24 14:13:29.900546] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.697 NULL1 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=63601 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.697 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 63601 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.697 14:13:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.955 14:13:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.955 14:13:30 
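Everything from here to the "No such process" line is one pattern: connect_stress was launched in the background against the listener, the cat loop above queued twenty RPC snippets into rpc.txt, and the harness replays them for as long as the stress process answers kill -0. The shape of it with tracing stripped (the exact loop body in connect_stress.sh may differ slightly):

  "$SPDK_ROOT/test/nvme/connect_stress/connect_stress" -c 0x1 \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -t 10 &
  PERF_PID=$!
  while kill -0 "$PERF_PID" 2>/dev/null; do   # kill -0 probes existence, sends no signal
    rpc_cmd < "$rpcs"                         # replay the 20 queued RPC commands
  done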
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 63601 00:15:02.955 14:13:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:02.955 14:13:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.955 14:13:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:03.519 14:13:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.519 14:13:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 63601 00:15:03.519 14:13:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:03.519 14:13:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.519 14:13:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:03.776 14:13:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.776 14:13:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 63601 00:15:03.776 14:13:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:03.776 14:13:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.776 14:13:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:04.033 14:13:31 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.033 14:13:31 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 63601 00:15:04.033 14:13:31 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:04.033 14:13:31 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.033 14:13:31 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:04.291 14:13:31 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.291 14:13:31 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 63601 00:15:04.291 14:13:31 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:04.291 14:13:31 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.291 14:13:31 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:04.549 14:13:31 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.549 14:13:31 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 63601 00:15:04.549 14:13:31 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:04.549 14:13:31 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.549 14:13:31 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:05.114 14:13:32 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.114 14:13:32 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 63601 00:15:05.114 14:13:32 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:05.114 14:13:32 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.114 14:13:32 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:05.371 14:13:32 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.371 14:13:32 nvmf_rdma.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 63601 00:15:05.371 14:13:32 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:05.371 14:13:32 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.371 14:13:32 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:05.628 14:13:32 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.628 14:13:32 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 63601 00:15:05.628 14:13:32 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:05.628 14:13:32 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.628 14:13:32 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:05.885 14:13:33 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.885 14:13:33 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 63601 00:15:05.885 14:13:33 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:05.885 14:13:33 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.885 14:13:33 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.450 14:13:33 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.450 14:13:33 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 63601 00:15:06.450 14:13:33 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:06.450 14:13:33 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.450 14:13:33 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.707 14:13:33 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.707 14:13:33 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 63601 00:15:06.707 14:13:33 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:06.707 14:13:33 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.707 14:13:33 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.965 14:13:34 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.965 14:13:34 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 63601 00:15:06.965 14:13:34 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:06.965 14:13:34 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.965 14:13:34 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:07.249 14:13:34 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.249 14:13:34 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 63601 00:15:07.249 14:13:34 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:07.249 14:13:34 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.249 14:13:34 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:07.506 14:13:34 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.506 14:13:34 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 
63601 00:15:07.506 14:13:34 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:07.506 14:13:34 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.506 14:13:34 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:07.763 14:13:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.763 14:13:35 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 63601 00:15:07.763 14:13:35 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:07.763 14:13:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.763 14:13:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:08.327 14:13:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.327 14:13:35 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 63601 00:15:08.327 14:13:35 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:08.327 14:13:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.327 14:13:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:08.584 14:13:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.584 14:13:35 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 63601 00:15:08.584 14:13:35 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:08.584 14:13:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.584 14:13:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:08.841 14:13:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.841 14:13:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 63601 00:15:08.841 14:13:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:08.841 14:13:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.841 14:13:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:09.098 14:13:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.098 14:13:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 63601 00:15:09.098 14:13:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:09.098 14:13:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.098 14:13:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:09.662 14:13:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.662 14:13:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 63601 00:15:09.662 14:13:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:09.662 14:13:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.662 14:13:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:09.919 14:13:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.919 14:13:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 63601 00:15:09.919 14:13:37 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:09.919 14:13:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.919 14:13:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:10.177 14:13:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.177 14:13:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 63601 00:15:10.177 14:13:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:10.177 14:13:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.177 14:13:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:10.434 14:13:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.434 14:13:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 63601 00:15:10.434 14:13:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:10.434 14:13:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.434 14:13:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:10.691 14:13:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.691 14:13:38 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 63601 00:15:10.691 14:13:38 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:10.691 14:13:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.691 14:13:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:11.257 14:13:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.257 14:13:38 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 63601 00:15:11.257 14:13:38 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:11.257 14:13:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.257 14:13:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:11.515 14:13:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.515 14:13:38 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 63601 00:15:11.515 14:13:38 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:11.515 14:13:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.515 14:13:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:11.772 14:13:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.772 14:13:39 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 63601 00:15:11.772 14:13:39 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:11.772 14:13:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.772 14:13:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:12.029 14:13:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.030 14:13:39 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 63601 00:15:12.030 14:13:39 nvmf_rdma.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:15:12.030 14:13:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.030 14:13:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:12.287 14:13:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.287 14:13:39 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 63601 00:15:12.287 14:13:39 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:12.287 14:13:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.287 14:13:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:12.852 14:13:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.852 14:13:39 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 63601 00:15:12.852 14:13:39 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:12.852 14:13:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.852 14:13:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:12.852 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:15:13.110 14:13:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.110 14:13:40 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 63601 00:15:13.110 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (63601) - No such process 00:15:13.110 14:13:40 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 63601 00:15:13.110 14:13:40 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:13.110 14:13:40 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:13.110 14:13:40 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:13.110 14:13:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:13.110 14:13:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:15:13.110 14:13:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:13.110 14:13:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:13.110 14:13:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:15:13.110 14:13:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:13.110 14:13:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:13.110 rmmod nvme_rdma 00:15:13.110 rmmod nvme_fabrics 00:15:13.110 14:13:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:13.110 14:13:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:15:13.110 14:13:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:15:13.110 14:13:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 63519 ']' 00:15:13.110 14:13:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 63519 00:15:13.110 14:13:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 63519 ']' 00:15:13.110 14:13:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@950 
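Teardown above mirrors setup: unload the initiator modules, then kill the target by the pid recorded at startup (the kill/wait pair follows in the next lines). Condensed, including killprocess's ps check on the process name before signalling, as traced:

  modprobe -v -r nvme-rdma      # produces the rmmod nvme_rdma / nvme_fabrics lines above
  modprobe -v -r nvme-fabrics
  if [[ $(ps --no-headers -o comm= "$nvmfpid") != sudo ]]; then
    kill "$nvmfpid"             # nvmfpid=63519 in this run
    wait "$nvmfpid" 2>/dev/null || true
  fi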
-- # kill -0 63519 00:15:13.110 14:13:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:15:13.110 14:13:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:13.110 14:13:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 63519 00:15:13.110 14:13:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:13.110 14:13:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:13.110 14:13:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 63519' 00:15:13.110 killing process with pid 63519 00:15:13.110 14:13:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 63519 00:15:13.110 14:13:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 63519 00:15:13.368 14:13:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:13.368 14:13:40 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:13.368 00:15:13.368 real 0m13.878s 00:15:13.368 user 0m39.375s 00:15:13.368 sys 0m3.781s 00:15:13.368 14:13:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:13.368 14:13:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:13.368 ************************************ 00:15:13.368 END TEST nvmf_connect_stress 00:15:13.368 ************************************ 00:15:13.368 14:13:40 nvmf_rdma -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:15:13.368 14:13:40 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:13.368 14:13:40 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:13.368 14:13:40 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:13.368 ************************************ 00:15:13.368 START TEST nvmf_fused_ordering 00:15:13.368 ************************************ 00:15:13.368 14:13:40 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:15:13.626 * Looking for test storage... 
00:15:13.626 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:15:13.626 14:13:40 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:16.156 14:13:43 
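The kilobyte PATH lines above keep growing because export.sh prepends the same three toolchain directories every time a test sources it. Purely as an aside, since nothing in the suite does this, the duplicates could be squeezed out in place:

  # awk splits on ':' and keeps only the first occurrence of each entry
  PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
  export PATH=${PATH%:}   # drop the trailing ':' left behind by ORS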
nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:15:16.156 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:15:16.156 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:15:16.156 Found net devices under 0000:81:00.0: mlx_0_0 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:15:16.156 Found net devices under 0000:81:00.1: mlx_0_1 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@420 -- # rdma_device_init 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@58 -- # uname 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # 
continue 2 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:16.156 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:16.156 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:15:16.156 altname enp129s0f0np0 00:15:16.156 inet 192.168.100.8/24 scope global mlx_0_0 00:15:16.156 valid_lft forever preferred_lft forever 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:16.156 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:16.156 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:15:16.156 altname enp129s0f1np1 00:15:16.156 inet 192.168.100.9/24 scope global mlx_0_1 00:15:16.156 valid_lft forever preferred_lft forever 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:16.156 
14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:16.156 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:16.157 192.168.100.9' 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:16.157 192.168.100.9' 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- 
nvmf/common.sh@457 -- # head -n 1 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:16.157 192.168.100.9' 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # tail -n +2 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # head -n 1 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=66819 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 66819 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 66819 ']' 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:16.157 [2024-07-24 14:13:43.207860] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:15:16.157 [2024-07-24 14:13:43.207945] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:16.157 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.157 [2024-07-24 14:13:43.279857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.157 [2024-07-24 14:13:43.368454] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:16.157 [2024-07-24 14:13:43.368520] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
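For readers following the xtrace: the get_ip_address calls above reduce to a single iproute2 pipeline. A minimal stand-alone sketch, reconstructed from the commands visible in the trace (the function name matches the harness's nvmf/common.sh, but treat this as an illustration rather than the canonical source):

#!/usr/bin/env bash
# Extract the IPv4 address assigned to an interface, as traced above.
# `ip -o -4 addr show <if>` emits one line per address; field 4 is "ADDR/PREFIX",
# so awk picks the field and cut drops the prefix length.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address mlx_0_0   # prints 192.168.100.8 on this test bed
get_ip_address mlx_0_1   # prints 192.168.100.9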
00:15:16.157 [2024-07-24 14:13:43.368547] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:16.157 [2024-07-24 14:13:43.368561] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:16.157 [2024-07-24 14:13:43.368572] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:16.157 [2024-07-24 14:13:43.368604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.157 14:13:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:16.414 [2024-07-24 14:13:43.536359] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xbb1c70/0xbb6120) succeed. 00:15:16.414 [2024-07-24 14:13:43.548340] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xbb3120/0xbf77b0) succeed. 
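The nvmfappstart and first rpc_cmd steps just traced amount to launching the target and issuing one RPC once its UNIX socket is up. A hedged sketch of that sequence outside the harness; the SPDK path is an assumption to adjust for your checkout, while the binary name, flags, and RPC arguments are taken from the log:

SPDK=/path/to/spdk   # assumption: root of an SPDK build tree

# Start the target: shm id 0, tracepoint mask 0xFFFF, core mask 0x2 (as logged).
"$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &

# Poll the RPC socket until the app is listening (what the harness's
# waitforlisten helper does before returning).
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done

# Create the RDMA transport with the options recorded in the trace:
# 1024 shared receive buffers, 8192-byte IO unit size.
"$SPDK/scripts/rpc.py" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192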
00:15:16.414 14:13:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.414 14:13:43 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:16.414 14:13:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.414 14:13:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:16.414 14:13:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.414 14:13:43 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:16.414 14:13:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.414 14:13:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:16.414 [2024-07-24 14:13:43.620616] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:16.414 14:13:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.414 14:13:43 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:16.414 14:13:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.414 14:13:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:16.414 NULL1 00:15:16.414 14:13:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.414 14:13:43 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:16.414 14:13:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.414 14:13:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:16.414 14:13:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.414 14:13:43 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:16.414 14:13:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.414 14:13:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:16.414 14:13:43 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.414 14:13:43 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:16.414 [2024-07-24 14:13:43.664703] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
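Spelled out, the rpc_cmd calls and the test invocation above look like the following; a sketch under the same path assumption as the previous snippet, with every argument copied from the trace. The fused_ordering(N) lines that follow are the test binary's per-command progress counter (0 through 1023 here), and the test depends on that sequence staying in order:

SPDK=/path/to/spdk   # assumption, as in the previous sketch
RPC="$SPDK/scripts/rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10   # any host, serial number, max 10 namespaces
$RPC nvmf_subsystem_add_listener "$NQN" -t rdma -a 192.168.100.8 -s 4420
$RPC bdev_null_create NULL1 1000 512   # 1000 MB null bdev, 512-byte blocks: the 1GB namespace reported below
$RPC bdev_wait_for_examine
$RPC nvmf_subsystem_add_ns "$NQN" NULL1

# Connect and run the ordering check against the listener just created.
"$SPDK/test/nvme/fused_ordering/fused_ordering" \
    -r "trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:$NQN"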
00:15:16.414 [2024-07-24 14:13:43.664746] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66954 ] 00:15:16.414 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.672 Attached to nqn.2016-06.io.spdk:cnode1 00:15:16.672 Namespace ID: 1 size: 1GB 00:15:16.672 fused_ordering(0) 00:15:16.672 fused_ordering(1) 00:15:16.672 fused_ordering(2) 00:15:16.672 fused_ordering(3) 00:15:16.672 fused_ordering(4) 00:15:16.672 fused_ordering(5) 00:15:16.672 fused_ordering(6) 00:15:16.672 fused_ordering(7) 00:15:16.672 fused_ordering(8) 00:15:16.672 fused_ordering(9) 00:15:16.672 fused_ordering(10) 00:15:16.672 fused_ordering(11) 00:15:16.672 fused_ordering(12) 00:15:16.672 fused_ordering(13) 00:15:16.672 fused_ordering(14) 00:15:16.672 fused_ordering(15) 00:15:16.672 fused_ordering(16) 00:15:16.672 fused_ordering(17) 00:15:16.672 fused_ordering(18) 00:15:16.672 fused_ordering(19) 00:15:16.672 fused_ordering(20) 00:15:16.672 fused_ordering(21) 00:15:16.672 fused_ordering(22) 00:15:16.672 fused_ordering(23) 00:15:16.672 fused_ordering(24) 00:15:16.672 fused_ordering(25) 00:15:16.672 fused_ordering(26) 00:15:16.672 fused_ordering(27) 00:15:16.672 fused_ordering(28) 00:15:16.672 fused_ordering(29) 00:15:16.672 fused_ordering(30) 00:15:16.672 fused_ordering(31) 00:15:16.672 fused_ordering(32) 00:15:16.672 fused_ordering(33) 00:15:16.672 fused_ordering(34) 00:15:16.672 fused_ordering(35) 00:15:16.672 fused_ordering(36) 00:15:16.672 fused_ordering(37) 00:15:16.672 fused_ordering(38) 00:15:16.672 fused_ordering(39) 00:15:16.672 fused_ordering(40) 00:15:16.672 fused_ordering(41) 00:15:16.672 fused_ordering(42) 00:15:16.672 fused_ordering(43) 00:15:16.672 fused_ordering(44) 00:15:16.672 fused_ordering(45) 00:15:16.672 fused_ordering(46) 00:15:16.672 fused_ordering(47) 00:15:16.672 fused_ordering(48) 00:15:16.672 fused_ordering(49) 00:15:16.672 fused_ordering(50) 00:15:16.672 fused_ordering(51) 00:15:16.672 fused_ordering(52) 00:15:16.672 fused_ordering(53) 00:15:16.672 fused_ordering(54) 00:15:16.672 fused_ordering(55) 00:15:16.672 fused_ordering(56) 00:15:16.672 fused_ordering(57) 00:15:16.672 fused_ordering(58) 00:15:16.672 fused_ordering(59) 00:15:16.672 fused_ordering(60) 00:15:16.672 fused_ordering(61) 00:15:16.672 fused_ordering(62) 00:15:16.672 fused_ordering(63) 00:15:16.672 fused_ordering(64) 00:15:16.672 fused_ordering(65) 00:15:16.672 fused_ordering(66) 00:15:16.672 fused_ordering(67) 00:15:16.672 fused_ordering(68) 00:15:16.672 fused_ordering(69) 00:15:16.672 fused_ordering(70) 00:15:16.672 fused_ordering(71) 00:15:16.672 fused_ordering(72) 00:15:16.672 fused_ordering(73) 00:15:16.672 fused_ordering(74) 00:15:16.672 fused_ordering(75) 00:15:16.672 fused_ordering(76) 00:15:16.672 fused_ordering(77) 00:15:16.672 fused_ordering(78) 00:15:16.672 fused_ordering(79) 00:15:16.672 fused_ordering(80) 00:15:16.672 fused_ordering(81) 00:15:16.672 fused_ordering(82) 00:15:16.672 fused_ordering(83) 00:15:16.672 fused_ordering(84) 00:15:16.672 fused_ordering(85) 00:15:16.672 fused_ordering(86) 00:15:16.672 fused_ordering(87) 00:15:16.672 fused_ordering(88) 00:15:16.672 fused_ordering(89) 00:15:16.672 fused_ordering(90) 00:15:16.672 fused_ordering(91) 00:15:16.672 fused_ordering(92) 00:15:16.672 fused_ordering(93) 00:15:16.672 fused_ordering(94) 00:15:16.672 fused_ordering(95) 00:15:16.672 fused_ordering(96) 00:15:16.672 
fused_ordering(97) … fused_ordering(956) [860 counter lines elided: the tool prints one fused_ordering(N) line per command, N increasing by one with no gaps or reordering, timestamps running 00:15:16.672 through 00:15:17.192]
fused_ordering(957) 00:15:17.192 fused_ordering(958) 00:15:17.192 fused_ordering(959) 00:15:17.192 fused_ordering(960) 00:15:17.192 fused_ordering(961) 00:15:17.192 fused_ordering(962) 00:15:17.192 fused_ordering(963) 00:15:17.192 fused_ordering(964) 00:15:17.192 fused_ordering(965) 00:15:17.192 fused_ordering(966) 00:15:17.192 fused_ordering(967) 00:15:17.192 fused_ordering(968) 00:15:17.192 fused_ordering(969) 00:15:17.192 fused_ordering(970) 00:15:17.192 fused_ordering(971) 00:15:17.192 fused_ordering(972) 00:15:17.192 fused_ordering(973) 00:15:17.192 fused_ordering(974) 00:15:17.192 fused_ordering(975) 00:15:17.192 fused_ordering(976) 00:15:17.192 fused_ordering(977) 00:15:17.192 fused_ordering(978) 00:15:17.192 fused_ordering(979) 00:15:17.192 fused_ordering(980) 00:15:17.192 fused_ordering(981) 00:15:17.192 fused_ordering(982) 00:15:17.192 fused_ordering(983) 00:15:17.192 fused_ordering(984) 00:15:17.192 fused_ordering(985) 00:15:17.192 fused_ordering(986) 00:15:17.192 fused_ordering(987) 00:15:17.192 fused_ordering(988) 00:15:17.192 fused_ordering(989) 00:15:17.192 fused_ordering(990) 00:15:17.192 fused_ordering(991) 00:15:17.192 fused_ordering(992) 00:15:17.192 fused_ordering(993) 00:15:17.192 fused_ordering(994) 00:15:17.192 fused_ordering(995) 00:15:17.192 fused_ordering(996) 00:15:17.192 fused_ordering(997) 00:15:17.192 fused_ordering(998) 00:15:17.192 fused_ordering(999) 00:15:17.192 fused_ordering(1000) 00:15:17.192 fused_ordering(1001) 00:15:17.192 fused_ordering(1002) 00:15:17.192 fused_ordering(1003) 00:15:17.192 fused_ordering(1004) 00:15:17.192 fused_ordering(1005) 00:15:17.192 fused_ordering(1006) 00:15:17.192 fused_ordering(1007) 00:15:17.192 fused_ordering(1008) 00:15:17.192 fused_ordering(1009) 00:15:17.192 fused_ordering(1010) 00:15:17.192 fused_ordering(1011) 00:15:17.192 fused_ordering(1012) 00:15:17.192 fused_ordering(1013) 00:15:17.192 fused_ordering(1014) 00:15:17.192 fused_ordering(1015) 00:15:17.192 fused_ordering(1016) 00:15:17.192 fused_ordering(1017) 00:15:17.192 fused_ordering(1018) 00:15:17.192 fused_ordering(1019) 00:15:17.192 fused_ordering(1020) 00:15:17.192 fused_ordering(1021) 00:15:17.192 fused_ordering(1022) 00:15:17.192 fused_ordering(1023) 00:15:17.192 14:13:44 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:17.192 14:13:44 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:17.193 14:13:44 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:17.193 14:13:44 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:15:17.193 14:13:44 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:17.193 14:13:44 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:17.193 14:13:44 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:15:17.193 14:13:44 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:17.193 14:13:44 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:17.193 rmmod nvme_rdma 00:15:17.193 rmmod nvme_fabrics 00:15:17.193 14:13:44 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:17.193 14:13:44 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:15:17.193 14:13:44 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:15:17.193 14:13:44 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 66819 ']' 00:15:17.193 
14:13:44 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 66819 00:15:17.193 14:13:44 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 66819 ']' 00:15:17.193 14:13:44 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 66819 00:15:17.193 14:13:44 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:15:17.193 14:13:44 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:17.193 14:13:44 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 66819 00:15:17.193 14:13:44 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:17.193 14:13:44 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:17.193 14:13:44 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 66819' 00:15:17.193 killing process with pid 66819 00:15:17.193 14:13:44 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 66819 00:15:17.193 14:13:44 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 66819 00:15:17.451 14:13:44 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:17.451 14:13:44 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:17.451 00:15:17.451 real 0m4.102s 00:15:17.451 user 0m3.318s 00:15:17.451 sys 0m1.976s 00:15:17.451 14:13:44 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:17.451 14:13:44 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:17.451 ************************************ 00:15:17.451 END TEST nvmf_fused_ordering 00:15:17.451 ************************************ 00:15:17.709 14:13:44 nvmf_rdma -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:15:17.709 14:13:44 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:17.709 14:13:44 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:17.709 14:13:44 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:17.709 ************************************ 00:15:17.709 START TEST nvmf_delete_subsystem 00:15:17.709 ************************************ 00:15:17.709 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:15:17.709 * Looking for test storage... 
00:15:17.709 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:17.709 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:17.709 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:15:17.709 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:17.709 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:17.709 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:17.709 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:17.709 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:17.709 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:17.709 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:17.709 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:17.709 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:17.709 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:17.709 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:15:17.709 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:15:17.709 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:17.709 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:17.709 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:17.710 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:17.710 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:17.710 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.710 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:17.710 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:17.710 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.710 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.710 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.710 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:15:17.710 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.710 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:15:17.710 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:17.710 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:17.710 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:17.710 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:17.710 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:17.710 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:17.710 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:17.710 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:17.710 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:15:17.710 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:17.710 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:17.710 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:17.710 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:17.710 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:17.710 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.710 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:17.710 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.710 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:17.710 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:17.710 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:15:17.710 14:13:44 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:20.237 14:13:47 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:15:20.237 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:15:20.237 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:15:20.237 Found net devices under 0000:81:00.0: mlx_0_0 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:20.237 14:13:47 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:15:20.237 Found net devices under 0000:81:00.1: mlx_0_1 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # rdma_device_init 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # uname 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:20.237 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:20.237 14:13:47 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:20.238 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:20.238 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:15:20.238 altname enp129s0f0np0 00:15:20.238 inet 192.168.100.8/24 scope global mlx_0_0 00:15:20.238 valid_lft forever preferred_lft forever 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:20.238 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:20.238 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:15:20.238 altname enp129s0f1np1 00:15:20.238 inet 192.168.100.9/24 scope global mlx_0_1 00:15:20.238 valid_lft forever preferred_lft forever 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 
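allocate_nic_ips walks get_rdma_if_list and, for each RDMA-capable interface, pulls the IPv4 address out of ip -o -4 addr show with awk and cut, exactly as traced at common.sh@112-113 above. The same extraction as a standalone helper; the function and interface names are taken from these traces:

    # Return the first IPv4 address of an interface, as in common.sh@113:
    # column 4 of `ip -o -4 addr show` is "ADDR/PREFIX"; cut strips the prefix.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0   # prints 192.168.100.8 on this testbed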
00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:20.238 14:13:47 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:20.238 192.168.100.9' 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:20.238 192.168.100.9' 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # head -n 1 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:20.238 192.168.100.9' 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # tail -n +2 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # head -n 1 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=69028 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 69028 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 69028 ']' 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:20.238 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:20.238 [2024-07-24 14:13:47.521305] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
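nvmfappstart -m 0x3 launches the target binary with shm id 0, the full 0xFFFF trace mask, and a two-core mask, and waitforlisten then polls until the RPC socket at /var/tmp/spdk.sock answers (max_retries=100 per the trace above). A rough sketch of that start-and-wait sequence; the socket-existence probe is an assumption, while the binary path and flags are verbatim from this log:

    # Start nvmf_tgt and wait for its RPC socket, approximating
    # nvmfappstart/waitforlisten above.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!

    rpc_addr=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do    # max_retries=100 per the trace
        [[ -S "$rpc_addr" ]] && break  # assumption: socket existence as readiness
        sleep 0.1
    done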
00:15:20.238 [2024-07-24 14:13:47.521393] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.238 EAL: No free 2048 kB hugepages reported on node 1 00:15:20.238 [2024-07-24 14:13:47.593309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:20.497 [2024-07-24 14:13:47.688275] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:20.497 [2024-07-24 14:13:47.688353] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:20.497 [2024-07-24 14:13:47.688370] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:20.497 [2024-07-24 14:13:47.688383] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:20.497 [2024-07-24 14:13:47.688395] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:20.497 [2024-07-24 14:13:47.691815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.497 [2024-07-24 14:13:47.691835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.497 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:20.497 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:15:20.497 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:20.497 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:20.497 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:20.497 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:20.497 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:20.497 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.497 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:20.497 [2024-07-24 14:13:47.860901] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f54400/0x1f588b0) succeed. 00:15:20.756 [2024-07-24 14:13:47.873004] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f558b0/0x1f99f40) succeed. 
00:15:20.756 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.757 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:20.757 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.757 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:20.757 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.757 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:20.757 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.757 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:20.757 [2024-07-24 14:13:47.972255] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:20.757 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.757 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:20.757 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.757 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:20.757 NULL1 00:15:20.757 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.757 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:20.757 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.757 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:20.757 Delay0 00:15:20.757 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.757 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:20.757 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.757 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:20.757 14:13:47 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.757 14:13:48 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=69051 00:15:20.757 14:13:48 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:20.757 14:13:48 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:15:20.757 EAL: No free 2048 kB hugepages reported on node 1 00:15:20.757 [2024-07-24 14:13:48.070667] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
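The fixture traced above is built in five RPC calls plus one perf invocation: an RDMA transport, subsystem cnode1 capped at 10 namespaces, a listener on 192.168.100.8:4420, a 1000 MiB null bdev with 512-byte blocks wrapped in a delay bdev injecting 1,000,000 us (one second) of average and tail read/write latency, that delay bdev attached as a namespace, and finally spdk_nvme_perf driving queue-depth-128 randrw at the target. The same sequence as direct rpc.py calls; the rpc.py path is an assumption, every argument is verbatim from the traces:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # assumed path

    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc bdev_null_create NULL1 1000 512
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # I/O load that will still be in flight when the subsystem is deleted:
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!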
00:15:22.654 14:13:50 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:15:22.654 14:13:50 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:22.654 14:13:50 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:15:24.025 NVMe io qpair process completion error [7 identical lines condensed]
00:15:24.025 14:13:51 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:24.025 14:13:51 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:15:24.025 14:13:51 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 69051
00:15:24.025 14:13:51 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:15:24.589 14:13:51 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:15:24.589 14:13:51 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 69051
00:15:24.589 14:13:51 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:15:24.848-00:15:24.850 [several hundred lines condensed: interleaved "Read completed with error (sct=0, sc=8)", "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries as the outstanding perf I/O aborts against the deleted subsystem]
00:15:24.850 Initializing NVMe Controllers
00:15:24.850 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:15:24.850 Controller IO queue size 128, less than required.
00:15:24.850 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:15:24.850 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:15:24.850 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:15:24.850 Initialization complete. Launching workers.
00:15:24.850 ========================================================
00:15:24.850 Latency(us)
00:15:24.850 Device Information : IOPS MiB/s Average min max
00:15:24.850 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.56 0.04 1594615.00 1001135.35 2973952.77
00:15:24.850 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.56 0.04 1592821.91 1000300.52 2972686.21
00:15:24.850 ========================================================
00:15:24.850 Total : 161.12 0.08 1593718.46 1000300.52 2973952.77
00:15:24.850
00:15:24.850 [2024-07-24 14:13:52.161534] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:15:24.850 14:13:52 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:15:24.850 14:13:52 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 69051
00:15:24.850 14:13:52 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:15:24.850 [2024-07-24 14:13:52.178660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:15:24.850 [2024-07-24 14:13:52.178693] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
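delete_subsystem.sh@34-38, traced above and in the retries that follow, is a bounded liveness poll: kill -0 checks whether the perf process still exists, sleep 0.5 paces the loop, and (( delay++ > 30 )) caps the wait at roughly 15 seconds. The expected outcome is the one this log shows: the deleted subsystem makes in-flight I/O complete with sc=8, and perf reports errors and exits. A sketch of the idiom with names matching the traces; the failure-branch body is an assumption:

    # Bounded wait for spdk_nvme_perf to exit after the subsystem is deleted,
    # mirroring delete_subsystem.sh@34-38.
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        if (( delay++ > 30 )); then    # ~15 s budget at 0.5 s per poll
            echo "perf still running after subsystem delete" >&2
            exit 1
        fi
        sleep 0.5
    done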
00:15:24.850 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:15:25.417 14:13:52 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:15:25.417 14:13:52 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 69051 00:15:25.417 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (69051) - No such process 00:15:25.417 14:13:52 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 69051 00:15:25.417 14:13:52 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:15:25.417 14:13:52 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 69051 00:15:25.417 14:13:52 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:15:25.417 14:13:52 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:25.417 14:13:52 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:15:25.417 14:13:52 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:25.417 14:13:52 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 69051 00:15:25.417 14:13:52 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:15:25.417 14:13:52 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:25.417 14:13:52 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:25.417 14:13:52 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:25.417 14:13:52 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:25.417 14:13:52 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.417 14:13:52 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:25.417 14:13:52 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.417 14:13:52 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:25.417 14:13:52 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.417 14:13:52 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:25.417 [2024-07-24 14:13:52.680884] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:25.417 14:13:52 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.417 14:13:52 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:25.417 14:13:52 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.417 14:13:52 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:25.417 14:13:52 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.417 14:13:52 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=69708 00:15:25.417 14:13:52 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:25.417 14:13:52 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:15:25.417 14:13:52 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 69708 00:15:25.417 14:13:52 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:25.417 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.417 [2024-07-24 14:13:52.753950] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:25.982 14:13:53 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:25.983 14:13:53 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 69708 00:15:25.983 14:13:53 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:26.548 14:13:53 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:26.548 14:13:53 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 69708 00:15:26.548 14:13:53 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:27.115 14:13:54 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:27.115 14:13:54 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 69708 00:15:27.115 14:13:54 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:27.372 14:13:54 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:27.372 14:13:54 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 69708 00:15:27.372 14:13:54 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:27.937 14:13:55 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:27.937 14:13:55 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 69708 00:15:27.937 14:13:55 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:28.504 14:13:55 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:28.504 14:13:55 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 69708 00:15:28.504 14:13:55 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:29.069 14:13:56 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:29.069 14:13:56 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 69708 00:15:29.069 14:13:56 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:29.635 14:13:56 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:29.635 14:13:56 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 69708 00:15:29.635 14:13:56 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:29.892 14:13:57 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:29.892 14:13:57 
nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 69708 00:15:29.892 14:13:57 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:30.458 14:13:57 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:30.458 14:13:57 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 69708 00:15:30.458 14:13:57 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:31.023 14:13:58 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:31.023 14:13:58 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 69708 00:15:31.023 14:13:58 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:31.588 14:13:58 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:31.588 14:13:58 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 69708 00:15:31.588 14:13:58 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:32.153 14:13:59 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:32.153 14:13:59 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 69708 00:15:32.153 14:13:59 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:32.411 14:13:59 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:32.411 14:13:59 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 69708 00:15:32.411 14:13:59 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:32.670 Initializing NVMe Controllers 00:15:32.670 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:15:32.670 Controller IO queue size 128, less than required. 00:15:32.670 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:32.670 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:32.670 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:32.671 Initialization complete. Launching workers. 
00:15:32.671 ========================================================
00:15:32.671                                                        Latency(us)
00:15:32.671 Device Information                                                               :   IOPS   MiB/s     Average         min         max
00:15:32.671 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2 : 128.00    0.06  1001335.92  1000055.74  1004214.06
00:15:32.671 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3 : 128.00    0.06  1002701.33  1000116.65  1006603.86
00:15:32.671 ========================================================
00:15:32.671 Total                                                                            : 256.00    0.12  1002018.63  1000055.74  1006603.86
00:15:32.671
00:15:32.929 14:14:00 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:32.929 14:14:00 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 69708 00:15:32.929 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (69708) - No such process 00:15:32.929 14:14:00 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 69708 00:15:32.929 14:14:00 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:32.929 14:14:00 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:15:32.929 14:14:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:32.929 14:14:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:15:32.929 14:14:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:32.929 14:14:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:32.929 14:14:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:15:32.929 14:14:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:32.929 14:14:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:32.929 rmmod nvme_rdma 00:15:32.929 rmmod nvme_fabrics 00:15:32.929 14:14:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:32.929 14:14:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:15:32.929 14:14:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:15:32.929 14:14:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 69028 ']' 00:15:32.929 14:14:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 69028 00:15:32.929 14:14:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 69028 ']' 00:15:32.929 14:14:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 69028 00:15:32.929 14:14:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:15:32.929 14:14:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:32.929 14:14:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69028 00:15:32.929 14:14:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:32.929 14:14:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:32.929 14:14:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69028' 00:15:32.929 killing process with pid 69028 00:15:32.929 14:14:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 69028
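The long run of "(( delay++ > 20 )) / kill -0 69708 / sleep 0.5" entries above is the script's polling loop: after removing the subsystem out from under the perf job, it waits for the background spdk_nvme_perf process to exit, with a bound so a hung process fails the test instead of stalling it. A minimal sketch of that pattern, with the perf arguments taken from the trace (the binary path variable and the bound are illustrative, not the script's exact source):

    "$SPDK_BIN/spdk_nvme_perf" -c 0xC -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' &
    perf_pid=$!
    delay=0
    while kill -0 "$perf_pid" 2> /dev/null; do   # signal 0 probes without killing
        (( delay++ > 20 )) && break              # bound the wait (~10 s at 0.5 s steps)
        sleep 0.5
    done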
14:14:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 69028 00:15:33.496 14:14:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:33.496 14:14:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:33.496 00:15:33.496 real 0m15.719s 00:15:33.496 user 0m47.868s 00:15:33.496 sys 0m2.866s 00:15:33.496 14:14:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:33.496 14:14:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:33.496 ************************************ 00:15:33.496 END TEST nvmf_delete_subsystem 00:15:33.496 ************************************ 00:15:33.496 14:14:00 nvmf_rdma -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:15:33.496 14:14:00 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:33.496 14:14:00 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:33.496 14:14:00 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:33.496 ************************************ 00:15:33.496 START TEST nvmf_ns_masking 00:15:33.496 ************************************ 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:15:33.496 * Looking for test storage... 00:15:33.496 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=a6f6719e-568f-4445-b844-bebf18b772c4 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:33.496 14:14:00 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.497 14:14:00 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:33.497 14:14:00 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.497 14:14:00 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:33.497 14:14:00 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:33.497 14:14:00 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:15:33.497 14:14:00 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:15:36.027 
14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:15:36.027 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:15:36.027 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:36.027 
14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:15:36.027 Found net devices under 0000:81:00.0: mlx_0_0 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:15:36.027 Found net devices under 0000:81:00.1: mlx_0_1 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@420 -- # rdma_device_init 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@58 -- # uname 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@502 -- # allocate_nic_ips 
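rdma_device_init above amounts to loading the InfiniBand/RDMA kernel modules before any interface or address work is attempted. Collected from the modprobe entries in the trace, the same sequence as a standalone loop:

    # Load the RDMA stack probed by load_ib_rdma_modules above.
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done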
00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:36.027 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:36.028 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:36.028 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:15:36.028 altname enp129s0f0np0 00:15:36.028 inet 192.168.100.8/24 scope global mlx_0_0 00:15:36.028 valid_lft forever preferred_lft forever 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:36.028 14:14:03 
nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:36.028 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:36.028 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:15:36.028 altname enp129s0f1np1 00:15:36.028 inet 192.168.100.9/24 scope global mlx_0_1 00:15:36.028 valid_lft forever preferred_lft forever 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print 
$4}' 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:36.028 192.168.100.9' 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:36.028 192.168.100.9' 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # head -n 1 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:36.028 192.168.100.9' 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # tail -n +2 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # head -n 1 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=72583 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 72583 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 72583 ']' 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:36.028 14:14:03 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
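nvmfappstart launches nvmf_tgt (with -i 0 -e 0xFFFF -m 0xF above) and then blocks in waitforlisten until the target's RPC socket accepts commands; the "Waiting for process to start up..." message is that wait. A minimal sketch of the same gate, assuming the default socket path printed above (the timeout is illustrative):

    # Block until the SPDK RPC socket appears, or fail after ~10 seconds.
    for _ in $(seq 1 100); do
        [[ -S /var/tmp/spdk.sock ]] && break
        sleep 0.1
    done
    [[ -S /var/tmp/spdk.sock ]] || { echo 'nvmf_tgt did not start' >&2; exit 1; }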
00:15:36.029 14:14:03 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:36.029 14:14:03 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:36.029 [2024-07-24 14:14:03.326240] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:15:36.029 [2024-07-24 14:14:03.326311] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.029 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.287 [2024-07-24 14:14:03.401902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:36.287 [2024-07-24 14:14:03.494047] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.287 [2024-07-24 14:14:03.494120] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:36.287 [2024-07-24 14:14:03.494146] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.287 [2024-07-24 14:14:03.494160] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:36.287 [2024-07-24 14:14:03.494173] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:36.288 [2024-07-24 14:14:03.494237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.288 [2024-07-24 14:14:03.494291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:36.288 [2024-07-24 14:14:03.494407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:36.288 [2024-07-24 14:14:03.494411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.288 14:14:03 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:36.288 14:14:03 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:15:36.288 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:36.288 14:14:03 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:36.288 14:14:03 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:36.288 14:14:03 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:36.288 14:14:03 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:36.567 [2024-07-24 14:14:03.868642] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe239e0/0xe27ed0) succeed. 00:15:36.567 [2024-07-24 14:14:03.879507] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe24fd0/0xe69560) succeed. 
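With both IB devices created, the trace below stands the target up over RPC: the RDMA transport, two 64 MiB malloc bdevs, the subsystem, its first namespace, and an RDMA listener. For reference, the same sequence as a plain script; rpc here abbreviates the full scripts/rpc.py path used in the trace:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420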
00:15:36.865 14:14:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:15:36.865 14:14:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:15:36.865 14:14:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:37.123 Malloc1 00:15:37.123 14:14:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:37.382 Malloc2 00:15:37.382 14:14:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:37.640 14:14:04 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:37.899 14:14:05 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:38.156 [2024-07-24 14:14:05.269889] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:38.157 14:14:05 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:15:38.157 14:14:05 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a6f6719e-568f-4445-b844-bebf18b772c4 -a 192.168.100.8 -s 4420 -i 4 00:15:38.415 14:14:05 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:15:38.415 14:14:05 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:15:38.415 14:14:05 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:38.415 14:14:05 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:15:38.415 14:14:05 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:15:40.315 14:14:07 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:40.315 14:14:07 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:40.315 14:14:07 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:40.315 14:14:07 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:40.315 14:14:07 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:40.315 14:14:07 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:15:40.315 14:14:07 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:40.315 14:14:07 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:40.573 14:14:07 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:40.573 14:14:07 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:40.573 14:14:07 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:15:40.573 14:14:07 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme 
list-ns /dev/nvme0 00:15:40.573 14:14:07 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:40.573 [ 0]:0x1 00:15:40.573 14:14:07 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:40.573 14:14:07 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:40.573 14:14:07 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=009be80b512f40f2812dcb1eef8f38cb 00:15:40.573 14:14:07 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 009be80b512f40f2812dcb1eef8f38cb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:40.573 14:14:07 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:40.832 14:14:07 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:15:40.832 14:14:07 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:40.832 14:14:07 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:40.832 [ 0]:0x1 00:15:40.832 14:14:07 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:40.832 14:14:07 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:40.832 14:14:08 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=009be80b512f40f2812dcb1eef8f38cb 00:15:40.832 14:14:08 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 009be80b512f40f2812dcb1eef8f38cb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:40.832 14:14:08 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:15:40.832 14:14:08 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:40.832 14:14:08 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:40.832 [ 1]:0x2 00:15:40.832 14:14:08 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:40.832 14:14:08 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:40.832 14:14:08 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=106ed401f0444050b87399bc233014ad 00:15:40.832 14:14:08 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 106ed401f0444050b87399bc233014ad != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:40.832 14:14:08 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:15:40.832 14:14:08 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:41.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.398 14:14:08 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:41.398 14:14:08 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:41.656 14:14:08 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:15:41.656 14:14:08 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a6f6719e-568f-4445-b844-bebf18b772c4 -a 192.168.100.8 -s 4420 -i 4 00:15:42.222 14:14:09 
nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:42.222 14:14:09 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:15:42.222 14:14:09 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:42.222 14:14:09 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:15:42.222 14:14:09 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:15:42.222 14:14:09 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:15:44.120 14:14:11 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:44.120 14:14:11 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:44.120 14:14:11 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:44.120 14:14:11 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:44.120 14:14:11 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:44.120 14:14:11 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:15:44.120 14:14:11 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:44.120 14:14:11 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:44.120 14:14:11 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:44.120 14:14:11 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:44.120 14:14:11 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:15:44.120 14:14:11 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:44.120 14:14:11 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:44.120 14:14:11 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:44.120 14:14:11 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:44.120 14:14:11 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:44.120 14:14:11 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:44.120 14:14:11 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:44.120 14:14:11 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:44.120 14:14:11 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:44.120 14:14:11 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:44.121 14:14:11 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:44.121 14:14:11 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:44.121 14:14:11 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:44.121 14:14:11 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:44.121 14:14:11 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:44.121 14:14:11 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:44.121 14:14:11 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:44.121 14:14:11 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:15:44.121 14:14:11 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:44.121 14:14:11 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:44.121 [ 0]:0x2 00:15:44.121 14:14:11 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:44.121 14:14:11 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:44.121 14:14:11 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=106ed401f0444050b87399bc233014ad 00:15:44.121 14:14:11 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 106ed401f0444050b87399bc233014ad != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:44.121 14:14:11 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:44.379 14:14:11 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:15:44.379 14:14:11 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:44.379 14:14:11 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:44.379 [ 0]:0x1 00:15:44.379 14:14:11 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:44.379 14:14:11 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:44.379 14:14:11 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=009be80b512f40f2812dcb1eef8f38cb 00:15:44.379 14:14:11 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 009be80b512f40f2812dcb1eef8f38cb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:44.379 14:14:11 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:15:44.379 14:14:11 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:44.379 14:14:11 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:44.379 [ 1]:0x2 00:15:44.379 14:14:11 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:44.379 14:14:11 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:44.637 14:14:11 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=106ed401f0444050b87399bc233014ad 00:15:44.637 14:14:11 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 106ed401f0444050b87399bc233014ad != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:44.637 14:14:11 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:44.637 14:14:12 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:15:44.637 14:14:12 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:44.637 14:14:12 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:44.637 14:14:12 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:44.637 14:14:12 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:44.637 14:14:12 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:44.637 14:14:12 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:44.637 14:14:12 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:44.894 14:14:12 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:44.894 14:14:12 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:44.894 14:14:12 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:44.894 14:14:12 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:44.894 14:14:12 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:44.894 14:14:12 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:44.894 14:14:12 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:44.894 14:14:12 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:44.894 14:14:12 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:44.894 14:14:12 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:44.894 14:14:12 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:15:44.894 14:14:12 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:44.894 14:14:12 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:44.894 [ 0]:0x2 00:15:44.894 14:14:12 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:44.894 14:14:12 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:44.894 14:14:12 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=106ed401f0444050b87399bc233014ad 00:15:44.894 14:14:12 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 106ed401f0444050b87399bc233014ad != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:44.894 14:14:12 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:15:44.894 14:14:12 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:45.152 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.152 14:14:12 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:45.410 14:14:12 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:15:45.410 14:14:12 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a6f6719e-568f-4445-b844-bebf18b772c4 -a 192.168.100.8 -s 4420 -i 4 00:15:45.975 14:14:13 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:45.975 14:14:13 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:15:45.975 14:14:13 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:45.975 14:14:13 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:15:45.976 14:14:13 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:15:45.976 14:14:13 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:15:47.873 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:47.873 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:47.873 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:47.873 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:15:47.873 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:47.873 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:15:47.873 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:47.873 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:47.873 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:47.873 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:47.873 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:15:47.873 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:47.873 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:47.873 [ 0]:0x1 00:15:47.873 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:47.873 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:47.873 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=009be80b512f40f2812dcb1eef8f38cb 00:15:47.873 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 009be80b512f40f2812dcb1eef8f38cb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:47.873 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:15:47.873 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:47.873 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:47.873 [ 1]:0x2 00:15:47.873 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:47.873 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:47.873 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=106ed401f0444050b87399bc233014ad 00:15:47.874 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 106ed401f0444050b87399bc233014ad != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:47.874 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:48.131 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:15:48.131 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:48.131 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 
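waitforserial, whose trace opens each connect step, polls lsblk until the expected number of namespaces carrying the target serial shows up. A condensed sketch of the loop as it reads from the trace (common/autotest_common.sh@1194-1204; the retry delay inside the loop is an assumption, only the initial sleep 2 is visible above):

  waitforserial() {
    local serial=$1 i=0
    local nvme_device_counter=1 nvme_devices=0
    [[ -n ${2:-} ]] && nvme_device_counter=$2
    sleep 2   # give the fabric connect a moment before polling
    while (( i++ <= 15 )); do
      nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
      (( nvme_devices == nvme_device_counter )) && return 0
      sleep 1   # assumed retry delay; not visible in this trace
    done
    return 1
  }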
-- # valid_exec_arg ns_is_visible 0x1 00:15:48.131 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:48.131 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:48.131 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:48.131 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:48.131 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:48.131 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:48.131 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:48.389 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:48.389 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:48.389 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:48.389 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:48.389 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:48.389 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:48.389 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:48.389 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:48.389 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:15:48.389 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:48.389 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:48.389 [ 0]:0x2 00:15:48.389 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:48.389 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:48.389 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=106ed401f0444050b87399bc233014ad 00:15:48.389 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 106ed401f0444050b87399bc233014ad != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:48.389 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:48.389 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:48.389 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:48.389 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:48.389 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:48.389 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:48.389 14:14:15 nvmf_rdma.nvmf_ns_masking -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:48.389 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:48.389 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:48.389 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:48.389 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:15:48.389 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:48.647 [2024-07-24 14:14:15.811096] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:48.647 request: 00:15:48.647 { 00:15:48.647 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:48.647 "nsid": 2, 00:15:48.647 "host": "nqn.2016-06.io.spdk:host1", 00:15:48.647 "method": "nvmf_ns_remove_host", 00:15:48.647 "req_id": 1 00:15:48.647 } 00:15:48.647 Got JSON-RPC error response 00:15:48.647 response: 00:15:48.647 { 00:15:48.647 "code": -32602, 00:15:48.647 "message": "Invalid parameters" 00:15:48.647 } 00:15:48.647 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:48.647 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:48.647 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:48.647 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:48.647 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:15:48.647 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:48.647 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:48.647 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:48.647 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:48.647 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:48.647 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:48.647 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:48.647 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:48.647 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:48.647 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:48.647 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:48.647 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:48.647 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:48.647 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:48.647 14:14:15 nvmf_rdma.nvmf_ns_masking -- 
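The NOT prefix used throughout ("NOT ns_is_visible 0x1", "NOT .../rpc.py nvmf_ns_remove_host ... 2 ...") inverts a command's exit status so an expected failure, like the Invalid parameters JSON-RPC error above, keeps the test green; the es bookkeeping in the trace is its exit-status plumbing. A simplified sketch of the idea (the real helper in common/autotest_common.sh also validates the argument with type -t/type -P, as the trace shows):

  NOT() {
    local es=0
    "$@" || es=$?
    # Exit statuses above 128 are signal deaths and still count as real failures.
    (( es > 128 )) && return "$es"
    # Succeed only if the wrapped command failed.
    (( es != 0 ))
  }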
common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:48.647 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:48.647 14:14:15 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:48.647 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:15:48.647 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:48.647 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:48.647 [ 0]:0x2 00:15:48.647 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:48.647 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:48.647 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=106ed401f0444050b87399bc233014ad 00:15:48.647 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 106ed401f0444050b87399bc233014ad != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:48.647 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:15:48.647 14:14:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:48.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:48.905 14:14:16 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:49.163 14:14:16 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:49.163 14:14:16 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:15:49.163 14:14:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:49.163 14:14:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:15:49.163 14:14:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:49.163 14:14:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:49.163 14:14:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:15:49.163 14:14:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:49.163 14:14:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:49.163 rmmod nvme_rdma 00:15:49.422 rmmod nvme_fabrics 00:15:49.422 14:14:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:49.422 14:14:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:15:49.422 14:14:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:15:49.422 14:14:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 72583 ']' 00:15:49.422 14:14:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 72583 00:15:49.422 14:14:16 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 72583 ']' 00:15:49.422 14:14:16 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 72583 00:15:49.422 14:14:16 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:15:49.422 14:14:16 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:49.422 14:14:16 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72583 00:15:49.422 14:14:16 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:49.422 
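The teardown that follows the last visibility check is the three-step sequence visible in the trace: disconnect the initiator, delete the subsystem over RPC, then unload the host-side modules (nvmftestfini wraps the module removal in set +e and retries). In outline:

  # Initiator side: drop the fabric controller.
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  # Target side: remove the subsystem.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
    nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  # Host kernel modules, removed verbosely as in the rmmod lines above.
  modprobe -v -r nvme-rdma
  modprobe -v -r nvme-fabrics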
14:14:16 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:49.422 14:14:16 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72583' 00:15:49.422 killing process with pid 72583 00:15:49.422 14:14:16 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 72583 00:15:49.422 14:14:16 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 72583 00:15:49.681 14:14:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:49.681 14:14:16 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:49.681 00:15:49.681 real 0m16.340s 00:15:49.681 user 0m58.116s 00:15:49.681 sys 0m3.139s 00:15:49.681 14:14:16 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:49.681 14:14:16 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:49.681 ************************************ 00:15:49.681 END TEST nvmf_ns_masking 00:15:49.681 ************************************ 00:15:49.681 14:14:16 nvmf_rdma -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:15:49.681 14:14:16 nvmf_rdma -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:15:49.681 14:14:16 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:49.681 14:14:16 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:49.681 14:14:17 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:49.681 ************************************ 00:15:49.681 START TEST nvmf_nvme_cli 00:15:49.681 ************************************ 00:15:49.681 14:14:17 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:15:49.941 * Looking for test storage... 
00:15:49.941 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:49.941 14:14:17 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:49.941 14:14:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:49.941 14:14:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:49.941 14:14:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:49.941 14:14:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:49.941 14:14:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:49.941 14:14:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:49.941 14:14:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:49.941 14:14:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:49.941 14:14:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:49.941 14:14:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:49.941 14:14:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:49.941 14:14:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:15:49.941 14:14:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:15:49.941 14:14:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:49.941 14:14:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:49.941 14:14:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:49.941 14:14:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:49.941 14:14:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:49.941 14:14:17 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:49.941 14:14:17 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:49.941 14:14:17 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:49.941 14:14:17 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.941 14:14:17 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.942 14:14:17 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.942 14:14:17 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:49.942 14:14:17 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.942 14:14:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:15:49.942 14:14:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:49.942 14:14:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:49.942 14:14:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:49.942 14:14:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:49.942 14:14:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:49.942 14:14:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:49.942 14:14:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:49.942 14:14:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:49.942 14:14:17 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:49.942 14:14:17 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:49.942 14:14:17 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:49.942 14:14:17 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:49.942 14:14:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:49.942 14:14:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:49.942 14:14:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:49.942 14:14:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:49.942 14:14:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:49.942 14:14:17 nvmf_rdma.nvmf_nvme_cli -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.942 14:14:17 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:49.942 14:14:17 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.942 14:14:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:49.942 14:14:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:49.942 14:14:17 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:15:49.942 14:14:17 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@322 -- # 
pci_devs+=("${x722[@]}") 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:15:52.512 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:15:52.512 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:15:52.512 Found net devices under 0000:81:00.0: mlx_0_0 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:52.512 14:14:19 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:15:52.512 Found net devices under 0000:81:00.1: mlx_0_1 00:15:52.512 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@420 -- # rdma_device_init 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@58 -- # uname 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
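rdma_device_init, traced just above, is essentially a batch of modprobes that brings up the kernel IB/RDMA stack before any IPs are assigned; the set it loads, per the trace (nvmf/common.sh@62-68):

  # Kernel modules loaded by rdma_device_init.
  for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"
  done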
00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:52.513 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:52.513 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:15:52.513 altname enp129s0f0np0 00:15:52.513 inet 192.168.100.8/24 scope global mlx_0_0 00:15:52.513 valid_lft forever preferred_lft forever 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:52.513 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:52.513 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:15:52.513 altname enp129s0f1np1 00:15:52.513 inet 192.168.100.9/24 scope global mlx_0_1 00:15:52.513 valid_lft forever preferred_lft forever 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 
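Both interface addresses are read back with the same one-liner; per the trace, get_ip_address is essentially:

  get_ip_address() {
    local interface=$1
    # In 'ip -o -4' output the fourth field is the CIDR address; strip the prefix length.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig
  get_ip_address mlx_0_1   # -> 192.168.100.9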
00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:52.513 192.168.100.9' 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:52.513 192.168.100.9' 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # head -n 1 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:52.513 192.168.100.9' 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # tail -n +2 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # head -n 1 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- 
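RDMA_IP_LIST then carries one address per line, and the first and second target IPs fall out with head/tail exactly as traced at nvmf/common.sh@457-458:

  RDMA_IP_LIST='192.168.100.8
  192.168.100.9'
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)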
nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=76291 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 76291 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 76291 ']' 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.513 14:14:19 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:52.514 14:14:19 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:52.514 [2024-07-24 14:14:19.630904] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:15:52.514 [2024-07-24 14:14:19.630981] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:52.514 EAL: No free 2048 kB hugepages reported on node 1 00:15:52.514 [2024-07-24 14:14:19.697507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:52.514 [2024-07-24 14:14:19.784827] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:52.514 [2024-07-24 14:14:19.784904] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:52.514 [2024-07-24 14:14:19.784918] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:52.514 [2024-07-24 14:14:19.784929] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:52.514 [2024-07-24 14:14:19.784939] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:52.514 [2024-07-24 14:14:19.784994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.514 [2024-07-24 14:14:19.785053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:52.514 [2024-07-24 14:14:19.785119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:52.514 [2024-07-24 14:14:19.785122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.772 14:14:19 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:52.772 14:14:19 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:15:52.772 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:52.772 14:14:19 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:52.772 14:14:19 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:52.772 14:14:19 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:52.772 14:14:19 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:52.772 14:14:19 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.772 14:14:19 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:52.772 [2024-07-24 14:14:19.965354] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x16d19e0/0x16d5ed0) succeed. 00:15:52.772 [2024-07-24 14:14:19.976327] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x16d2fd0/0x1717560) succeed. 00:15:52.772 14:14:20 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.772 14:14:20 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:52.772 14:14:20 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.772 14:14:20 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:53.030 Malloc0 00:15:53.030 14:14:20 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.030 14:14:20 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:53.030 14:14:20 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.030 14:14:20 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:53.030 Malloc1 00:15:53.030 14:14:20 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.030 14:14:20 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:53.030 14:14:20 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.030 14:14:20 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:53.030 14:14:20 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.030 14:14:20 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:53.030 14:14:20 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.030 14:14:20 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:53.030 14:14:20 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.030 14:14:20 
nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:53.030 14:14:20 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.030 14:14:20 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:53.030 14:14:20 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.031 14:14:20 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:53.031 14:14:20 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.031 14:14:20 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:53.031 [2024-07-24 14:14:20.203091] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:53.031 14:14:20 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.031 14:14:20 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:15:53.031 14:14:20 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.031 14:14:20 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:53.031 14:14:20 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.031 14:14:20 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -a 192.168.100.8 -s 4420 00:15:53.031 00:15:53.031 Discovery Log Number of Records 2, Generation counter 2 00:15:53.031 =====Discovery Log Entry 0====== 00:15:53.031 trtype: rdma 00:15:53.031 adrfam: ipv4 00:15:53.031 subtype: current discovery subsystem 00:15:53.031 treq: not required 00:15:53.031 portid: 0 00:15:53.031 trsvcid: 4420 00:15:53.031 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:53.031 traddr: 192.168.100.8 00:15:53.031 eflags: explicit discovery connections, duplicate discovery information 00:15:53.031 rdma_prtype: not specified 00:15:53.031 rdma_qptype: connected 00:15:53.031 rdma_cms: rdma-cm 00:15:53.031 rdma_pkey: 0x0000 00:15:53.031 =====Discovery Log Entry 1====== 00:15:53.031 trtype: rdma 00:15:53.031 adrfam: ipv4 00:15:53.031 subtype: nvme subsystem 00:15:53.031 treq: not required 00:15:53.031 portid: 0 00:15:53.031 trsvcid: 4420 00:15:53.031 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:53.031 traddr: 192.168.100.8 00:15:53.031 eflags: none 00:15:53.031 rdma_prtype: not specified 00:15:53.031 rdma_qptype: connected 00:15:53.031 rdma_cms: rdma-cm 00:15:53.031 rdma_pkey: 0x0000 00:15:53.031 14:14:20 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:53.031 14:14:20 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:53.031 14:14:20 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:53.031 14:14:20 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:53.031 14:14:20 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:53.031 14:14:20 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:53.031 14:14:20 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:53.031 14:14:20 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:53.031 14:14:20 
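The target bring-up traced through nvme_cli.sh@19-28 is the standard handful of RPCs (rpc_cmd in the trace is a thin wrapper that sends these over the app's UNIX socket; sizes, NQN, and serial below are taken from the trace):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

After that, the nvme discover output above shows the expected two discovery log entries: the discovery subsystem itself and cnode1.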
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:53.031 14:14:20 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:53.031 14:14:20 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:54.400 14:14:21 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:54.400 14:14:21 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:15:54.400 14:14:21 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:54.400 14:14:21 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:15:54.400 14:14:21 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:15:54.400 14:14:21 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:15:56.295 14:14:23 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:56.295 14:14:23 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:56.295 14:14:23 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:56.295 14:14:23 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:15:56.295 14:14:23 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:56.295 14:14:23 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:15:56.295 14:14:23 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:56.295 14:14:23 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:56.295 14:14:23 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:56.295 14:14:23 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:56.295 14:14:23 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:56.295 14:14:23 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:56.295 14:14:23 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:56.295 14:14:23 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:56.295 14:14:23 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:56.295 14:14:23 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:56.295 14:14:23 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:56.295 14:14:23 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:56.295 14:14:23 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:56.295 14:14:23 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:56.295 14:14:23 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:56.295 /dev/nvme0n1 ]] 00:15:56.295 14:14:23 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:56.295 14:14:23 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:56.295 14:14:23 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:56.295 14:14:23 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:56.295 14:14:23 
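get_nvme_devs, run before and after the connect, scrapes 'nvme list' and keeps only the /dev/nvme* rows; that is how nvme_num_before_connection=0 and nvme_num=2 are obtained. A sketch matching the read loop in the trace (nvmf/common.sh@521-526):

  get_nvme_devs() {
    local dev _
    while read -r dev _; do
      # Header rows ('Node', '-----...') don't start with a device path.
      [[ $dev == /dev/nvme* ]] && echo "$dev"
    done < <(nvme list)
  }
  devs=($(get_nvme_devs))   # e.g. /dev/nvme0n2 /dev/nvme0n1 after the connect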
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:56.295 14:14:23 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:56.295 14:14:23 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:56.295 14:14:23 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:56.295 14:14:23 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:56.295 14:14:23 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:56.295 14:14:23 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:56.295 14:14:23 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:56.295 14:14:23 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:56.295 14:14:23 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:56.295 14:14:23 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:56.295 14:14:23 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:56.295 14:14:23 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:57.666 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.666 14:14:24 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:57.666 14:14:24 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:15:57.666 14:14:24 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:15:57.666 14:14:24 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:57.666 14:14:24 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:15:57.666 14:14:24 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:57.666 14:14:24 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:15:57.666 14:14:24 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:57.666 14:14:24 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:57.666 14:14:24 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.666 14:14:24 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:57.666 14:14:24 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.666 14:14:24 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:57.666 14:14:24 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:57.666 14:14:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:57.666 14:14:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:15:57.666 14:14:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:57.666 14:14:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:57.666 14:14:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:15:57.666 14:14:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:57.666 14:14:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:57.666 rmmod nvme_rdma 00:15:57.666 rmmod nvme_fabrics 00:15:57.666 14:14:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v 
-r nvme-fabrics 00:15:57.666 14:14:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:15:57.667 14:14:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:15:57.667 14:14:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 76291 ']' 00:15:57.667 14:14:24 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 76291 00:15:57.667 14:14:24 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 76291 ']' 00:15:57.667 14:14:24 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 76291 00:15:57.667 14:14:24 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:15:57.667 14:14:24 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:57.667 14:14:24 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76291 00:15:57.667 14:14:24 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:57.667 14:14:24 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:57.667 14:14:24 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76291' 00:15:57.667 killing process with pid 76291 00:15:57.667 14:14:24 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 76291 00:15:57.667 14:14:24 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 76291 00:15:57.924 14:14:25 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:57.924 14:14:25 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:57.924 00:15:57.924 real 0m8.055s 00:15:57.924 user 0m21.746s 00:15:57.924 sys 0m2.255s 00:15:57.924 14:14:25 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:57.924 14:14:25 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:57.924 ************************************ 00:15:57.924 END TEST nvmf_nvme_cli 00:15:57.924 ************************************ 00:15:57.924 14:14:25 nvmf_rdma -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:15:57.924 14:14:25 nvmf_rdma -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:15:57.924 14:14:25 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:57.924 14:14:25 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:57.924 14:14:25 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:57.924 ************************************ 00:15:57.924 START TEST nvmf_host_management 00:15:57.924 ************************************ 00:15:57.924 14:14:25 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:15:57.924 * Looking for test storage... 
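For reference, the nvmf_nvme_cli run that just finished (END TEST above) reduces to the following host-side sequence. A minimal sketch, not the script itself: the subsystem NQN, host UUID, and 192.168.100.8:4420 listener are the values from this run, and the polling loop stands in for the waitforserial helper (15 tries, 2 s apart, as traced above); -i 15 is simply whatever NVME_CONNECT sets for these mlx5 NICs.

    # Discover what the RDMA target exports (prints the two discovery log entries above).
    nvme discover -t rdma -a 192.168.100.8 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 \
        --hostid=6b85a288-a0c4-e211-af09-001e678e7911

    # Connect to the I/O subsystem advertised in discovery log entry 1.
    nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 \
        --hostid=6b85a288-a0c4-e211-af09-001e678e7911

    # Stand-in for waitforserial: poll until both namespaces surface with the target serial.
    for i in $(seq 1 15); do
        [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 2 ] && break
        sleep 2
    done

    # Tear the association back down.
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1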
00:15:57.924 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:57.924 14:14:25 nvmf_rdma.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:57.924 14:14:25 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:15:57.924 14:14:25 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:57.924 14:14:25 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:57.924 14:14:25 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:57.924 14:14:25 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:15:57.925 14:14:25 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:00.454 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:00.454 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:16:00.454 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:00.454 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:00.454 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:00.454 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:00.454 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:00.454 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:16:00.454 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:00.454 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:16:00.454 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:16:00.454 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:16:00.454 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:16:00.454 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:16:00.454 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:16:00.454 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:00.454 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:00.454 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:00.454 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:16:00.455 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:16:00.455 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:16:00.455 Found net devices under 0000:81:00.0: mlx_0_0 00:16:00.455 
14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:16:00.455 Found net devices under 0000:81:00.1: mlx_0_1 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@420 -- # rdma_device_init 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@58 -- # uname 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@502 -- # allocate_nic_ips 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:00.455 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:00.455 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:16:00.455 altname enp129s0f0np0 00:16:00.455 inet 192.168.100.8/24 scope global mlx_0_0 00:16:00.455 valid_lft forever preferred_lft forever 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:00.455 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:00.455 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:16:00.455 altname enp129s0f1np1 00:16:00.455 inet 192.168.100.9/24 scope global mlx_0_1 00:16:00.455 valid_lft forever preferred_lft forever 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- 
nvmf/common.sh@422 -- # return 0 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:00.455 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:00.456 14:14:27 
nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:16:00.456 192.168.100.9' 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:16:00.456 192.168.100.9' 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # head -n 1 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:16:00.456 192.168.100.9' 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # tail -n +2 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # head -n 1 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=78981 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 78981 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 78981 ']' 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:00.456 14:14:27 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:00.456 [2024-07-24 14:14:27.760495] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
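The allocate_nic_ips trace a few lines up repeats one small recipe per RDMA netdev; as a standalone helper it is just the pipeline below (interface names and addresses are the ones found on this node):

    # Return the IPv4 address bound to an interface (nvmf/common.sh@112-113 above).
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0   # 192.168.100.8 -> NVMF_FIRST_TARGET_IP
    get_ip_address mlx_0_1   # 192.168.100.9 -> NVMF_SECOND_TARGET_IP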
00:16:00.456 [2024-07-24 14:14:27.760590] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:00.456 EAL: No free 2048 kB hugepages reported on node 1 00:16:00.714 [2024-07-24 14:14:27.830609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:00.714 [2024-07-24 14:14:27.915215] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:00.714 [2024-07-24 14:14:27.915276] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:00.714 [2024-07-24 14:14:27.915300] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:00.714 [2024-07-24 14:14:27.915310] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:00.714 [2024-07-24 14:14:27.915320] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:00.714 [2024-07-24 14:14:27.915402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:00.714 [2024-07-24 14:14:27.915466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:00.714 [2024-07-24 14:14:27.915535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:00.714 [2024-07-24 14:14:27.915537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:00.714 14:14:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:00.714 14:14:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:00.714 14:14:28 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:00.714 14:14:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:00.714 14:14:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:00.714 14:14:28 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:00.714 14:14:28 nvmf_rdma.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:00.714 14:14:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.714 14:14:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:00.972 [2024-07-24 14:14:28.098426] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xc25cd0/0xc2a1c0) succeed. 00:16:00.972 [2024-07-24 14:14:28.109640] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xc272c0/0xc6b850) succeed. 
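The create_subsystem step that follows boils down to a handful of framework RPCs. A sketch assuming scripts/rpc.py against the nvmf_tgt started above: the transport options, Malloc0 bdev, serial, and cnode0/host0 names all appear in this run, but the exact nvmf_create_subsystem flags are reconstructed, not read from rpcs.txt.

    # Transport, matching the rpc_cmd traced below.
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

    # Backing bdev: MALLOC_BDEV_SIZE=64 MiB, MALLOC_BLOCK_SIZE=512 B (host_management.sh@11-12).
    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512

    # Subsystem with the serial the host greps for, plus the allowed host NQN.
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420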
00:16:00.972 14:14:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.972 14:14:28 nvmf_rdma.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:00.972 14:14:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:00.972 14:14:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:00.972 14:14:28 nvmf_rdma.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:00.972 14:14:28 nvmf_rdma.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:00.972 14:14:28 nvmf_rdma.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:00.972 14:14:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.972 14:14:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:00.972 Malloc0 00:16:00.972 [2024-07-24 14:14:28.325268] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:00.972 14:14:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.972 14:14:28 nvmf_rdma.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:00.972 14:14:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:00.972 14:14:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:01.229 14:14:28 nvmf_rdma.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=79122 00:16:01.229 14:14:28 nvmf_rdma.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 79122 /var/tmp/bdevperf.sock 00:16:01.229 14:14:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 79122 ']' 00:16:01.230 14:14:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:01.230 14:14:28 nvmf_rdma.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:01.230 14:14:28 nvmf_rdma.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:01.230 14:14:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:01.230 14:14:28 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:01.230 14:14:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:01.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:16:01.230 14:14:28 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:01.230 14:14:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:01.230 14:14:28 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:01.230 14:14:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:01.230 14:14:28 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:01.230 { 00:16:01.230 "params": { 00:16:01.230 "name": "Nvme$subsystem", 00:16:01.230 "trtype": "$TEST_TRANSPORT", 00:16:01.230 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:01.230 "adrfam": "ipv4", 00:16:01.230 "trsvcid": "$NVMF_PORT", 00:16:01.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:01.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:01.230 "hdgst": ${hdgst:-false}, 00:16:01.230 "ddgst": ${ddgst:-false} 00:16:01.230 }, 00:16:01.230 "method": "bdev_nvme_attach_controller" 00:16:01.230 } 00:16:01.230 EOF 00:16:01.230 )") 00:16:01.230 14:14:28 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:01.230 14:14:28 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:01.230 14:14:28 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:01.230 14:14:28 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:01.230 "params": { 00:16:01.230 "name": "Nvme0", 00:16:01.230 "trtype": "rdma", 00:16:01.230 "traddr": "192.168.100.8", 00:16:01.230 "adrfam": "ipv4", 00:16:01.230 "trsvcid": "4420", 00:16:01.230 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:01.230 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:01.230 "hdgst": false, 00:16:01.230 "ddgst": false 00:16:01.230 }, 00:16:01.230 "method": "bdev_nvme_attach_controller" 00:16:01.230 }' 00:16:01.230 [2024-07-24 14:14:28.401661] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:16:01.230 [2024-07-24 14:14:28.401747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79122 ] 00:16:01.230 EAL: No free 2048 kB hugepages reported on node 1 00:16:01.230 [2024-07-24 14:14:28.472581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.230 [2024-07-24 14:14:28.558334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.487 Running I/O for 10 seconds... 
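The /dev/fd/63 handed to bdevperf above is the output of gen_nvmf_target_json; with the params block printed by nvmf/common.sh@558 it expands to roughly the JSON below. The outer subsystems/bdev wrapper is an assumption about gen_nvmf_target_json's framing (some versions also append a bdev_wait_for_examine entry), and /tmp/nvme0.json is a hypothetical file name used only for this sketch.

    cat > /tmp/nvme0.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "rdma",
                "traddr": "192.168.100.8",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF

    # Same shape of invocation as target/host_management.sh@72: queue depth 64,
    # 64 KiB verify workload for 10 seconds, RPC socket at /var/tmp/bdevperf.sock.
    build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvme0.json \
        -q 64 -o 65536 -w verify -t 10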
00:16:01.487 14:14:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:01.487 14:14:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:01.487 14:14:28 nvmf_rdma.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:01.487 14:14:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.487 14:14:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:01.487 14:14:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.487 14:14:28 nvmf_rdma.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:01.487 14:14:28 nvmf_rdma.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:01.487 14:14:28 nvmf_rdma.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:01.487 14:14:28 nvmf_rdma.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:01.487 14:14:28 nvmf_rdma.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:01.487 14:14:28 nvmf_rdma.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:01.487 14:14:28 nvmf_rdma.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:01.487 14:14:28 nvmf_rdma.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:01.487 14:14:28 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:01.487 14:14:28 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:01.487 14:14:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.487 14:14:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:01.487 14:14:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.487 14:14:28 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=113 00:16:01.487 14:14:28 nvmf_rdma.nvmf_host_management -- target/host_management.sh@58 -- # '[' 113 -ge 100 ']' 00:16:01.487 14:14:28 nvmf_rdma.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:16:01.487 14:14:28 nvmf_rdma.nvmf_host_management -- target/host_management.sh@60 -- # break 00:16:01.487 14:14:28 nvmf_rdma.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:16:01.487 14:14:28 nvmf_rdma.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:01.488 14:14:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.488 14:14:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:01.744 14:14:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.744 14:14:28 nvmf_rdma.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:01.744 14:14:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.744 14:14:28 nvmf_rdma.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:16:01.744 14:14:28 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.744 14:14:28 nvmf_rdma.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:16:02.678 [2024-07-24 14:14:29.868680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190bfe00 len:0x10000 key:0x182700 00:16:02.678 [2024-07-24 14:14:29.868725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52420 cdw0:f1633000 sqhd:7889 p:1 m:0 dnr:0 00:16:02.678 [2024-07-24 14:14:29.868751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190afd80 len:0x10000 key:0x182700 00:16:02.678 [2024-07-24 14:14:29.868766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52420 cdw0:f1633000 sqhd:7889 p:1 m:0 dnr:0 00:16:02.678 [2024-07-24 14:14:29.868807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001909fd00 len:0x10000 key:0x182700 00:16:02.678 [2024-07-24 14:14:29.868823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52420 cdw0:f1633000 sqhd:7889 p:1 m:0 dnr:0 00:16:02.678 [2024-07-24 14:14:29.868839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001908fc80 len:0x10000 key:0x182700 00:16:02.678 [2024-07-24 14:14:29.868853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52420 cdw0:f1633000 sqhd:7889 p:1 m:0 dnr:0 00:16:02.678 [2024-07-24 14:14:29.868868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001907fc00 len:0x10000 key:0x182700 00:16:02.678 [2024-07-24 14:14:29.868882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52420 cdw0:f1633000 sqhd:7889 p:1 m:0 dnr:0 00:16:02.678 [2024-07-24 14:14:29.868898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001906fb80 len:0x10000 key:0x182700 00:16:02.678 [2024-07-24 14:14:29.868912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52420 cdw0:f1633000 sqhd:7889 p:1 m:0 dnr:0 00:16:02.678 [2024-07-24 14:14:29.868936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001905fb00 len:0x10000 key:0x182700 00:16:02.678 [2024-07-24 14:14:29.868951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52420 cdw0:f1633000 sqhd:7889 p:1 m:0 dnr:0 00:16:02.678 [2024-07-24 14:14:29.868965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001904fa80 len:0x10000 key:0x182700 00:16:02.678 [2024-07-24 14:14:29.868980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52420 cdw0:f1633000 sqhd:7889 p:1 m:0 dnr:0 00:16:02.678 [2024-07-24 14:14:29.868996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29568 len:128 SGL 
KEYED DATA BLOCK ADDRESS 0x20001903fa00 len:0x10000 key:0x182700 00:16:02.678 [2024-07-24 14:14:29.869010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52420 cdw0:f1633000 sqhd:7889 p:1 m:0 dnr:0 00:16:02.678 [2024-07-24 14:14:29.869026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001902f980 len:0x10000 key:0x182700 00:16:02.678 [2024-07-24 14:14:29.869040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52420 cdw0:f1633000 sqhd:7889 p:1 m:0 dnr:0 00:16:02.678 [2024-07-24 14:14:29.869056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001901f900 len:0x10000 key:0x182700 00:16:02.678 [2024-07-24 14:14:29.869084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52420 cdw0:f1633000 sqhd:7889 p:1 m:0 dnr:0 00:16:02.678 [2024-07-24 14:14:29.869101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001900f880 len:0x10000 key:0x182700 00:16:02.678 [2024-07-24 14:14:29.869114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52420 cdw0:f1633000 sqhd:7889 p:1 m:0 dnr:0 00:16:02.678 [2024-07-24 14:14:29.869128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eeff80 len:0x10000 key:0x182600 00:16:02.678 [2024-07-24 14:14:29.869141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52420 cdw0:f1633000 sqhd:7889 p:1 m:0 dnr:0 00:16:02.678 [2024-07-24 14:14:29.869156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018edff00 len:0x10000 key:0x182600 00:16:02.678 [2024-07-24 14:14:29.869169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52420 cdw0:f1633000 sqhd:7889 p:1 m:0 dnr:0 00:16:02.678 [2024-07-24 14:14:29.869183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ecfe80 len:0x10000 key:0x182600 00:16:02.678 [2024-07-24 14:14:29.869196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52420 cdw0:f1633000 sqhd:7889 p:1 m:0 dnr:0 00:16:02.678 [2024-07-24 14:14:29.869210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ebfe00 len:0x10000 key:0x182600 00:16:02.678 [2024-07-24 14:14:29.869223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52420 cdw0:f1633000 sqhd:7889 p:1 m:0 dnr:0 00:16:02.678 [2024-07-24 14:14:29.869237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eafd80 len:0x10000 key:0x182600 00:16:02.678 [2024-07-24 14:14:29.869250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52420 cdw0:f1633000 sqhd:7889 p:1 m:0 dnr:0 00:16:02.678 [2024-07-24 14:14:29.869269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:30720 len:128 SGL 
KEYED DATA BLOCK ADDRESS 0x200018e9fd00 len:0x10000 key:0x182600 00:16:02.678 [2024-07-24 14:14:29.869283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52420 cdw0:f1633000 sqhd:7889 p:1 m:0 dnr:0 00:16:02.678 [... 14:14:29.869297 through 14:14:29.870585: the same nvme_io_qpair_print_command/spdk_nvme_print_completion pair repeats for every outstanding command on sqid:1 -- WRITE lba:30848-32640 and READ lba:24576-28160, len:128 each -- all completed as ABORTED - SQ DELETION (00/08) qid:1 cid:52420 cdw0:f1633000 sqhd:7889 p:1 m:0 dnr:0 ...] 00:16:02.680 [2024-07-24 14:14:29.870600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS
0x20000caf2000 len:0x10000 key:0x182400 00:16:02.680 [2024-07-24 14:14:29.870614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52420 cdw0:f1633000 sqhd:7889 p:1 m:0 dnr:0 00:16:02.680 [2024-07-24 14:14:29.870629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cad1000 len:0x10000 key:0x182400 00:16:02.680 [2024-07-24 14:14:29.870641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52420 cdw0:f1633000 sqhd:7889 p:1 m:0 dnr:0 00:16:02.680 [2024-07-24 14:14:29.872562] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019201580 was disconnected and freed. reset controller. 00:16:02.680 14:14:29 nvmf_rdma.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 79122 00:16:02.680 14:14:29 nvmf_rdma.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:02.680 14:14:29 nvmf_rdma.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:02.680 14:14:29 nvmf_rdma.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:02.680 14:14:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:02.680 14:14:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:02.680 14:14:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:02.680 14:14:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:02.680 { 00:16:02.680 "params": { 00:16:02.680 "name": "Nvme$subsystem", 00:16:02.680 "trtype": "$TEST_TRANSPORT", 00:16:02.680 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:02.680 "adrfam": "ipv4", 00:16:02.680 "trsvcid": "$NVMF_PORT", 00:16:02.680 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:02.680 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:02.680 "hdgst": ${hdgst:-false}, 00:16:02.680 "ddgst": ${ddgst:-false} 00:16:02.680 }, 00:16:02.680 "method": "bdev_nvme_attach_controller" 00:16:02.680 } 00:16:02.680 EOF 00:16:02.680 )") 00:16:02.680 14:14:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:02.680 14:14:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:02.680 14:14:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:02.680 14:14:29 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:02.680 "params": { 00:16:02.680 "name": "Nvme0", 00:16:02.680 "trtype": "rdma", 00:16:02.680 "traddr": "192.168.100.8", 00:16:02.680 "adrfam": "ipv4", 00:16:02.680 "trsvcid": "4420", 00:16:02.680 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:02.680 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:02.680 "hdgst": false, 00:16:02.680 "ddgst": false 00:16:02.680 }, 00:16:02.680 "method": "bdev_nvme_attach_controller" 00:16:02.680 }' 00:16:02.680 [2024-07-24 14:14:29.915431] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
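The expanded configuration printed above follows the standard SPDK JSON-config shape: a "subsystems" array whose "bdev" entry lists method/params pairs that bdevperf replays at startup before running the workload. A minimal standalone sketch, assuming the config is saved to a file rather than streamed through /dev/fd/62; the file path is illustrative, the wrapper layout is the usual SPDK one (the harness may add further entries), and all parameter values are copied from the expanded config above. The file (hypothetical path /tmp/bdevperf.json) would contain:

{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}

and the same workload would then be run with the flags taken from the invocation above:

build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1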
00:16:02.680 [2024-07-24 14:14:29.915517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79273 ] 00:16:02.680 EAL: No free 2048 kB hugepages reported on node 1 00:16:02.680 [2024-07-24 14:14:29.988289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.937 [2024-07-24 14:14:30.081143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.937 Running I/O for 1 seconds... 00:16:03.936 00:16:03.936 Latency(us) 00:16:03.936 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:03.936 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:03.936 Verification LBA range: start 0x0 length 0x400 00:16:03.936 Nvme0n1 : 1.01 2482.24 155.14 0.00 0.00 25237.32 1577.72 45244.11 00:16:03.936 =================================================================================================================== 00:16:03.937 Total : 2482.24 155.14 0.00 0.00 25237.32 1577.72 45244.11 00:16:04.204 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 79122 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:16:04.204 14:14:31 nvmf_rdma.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:16:04.204 14:14:31 nvmf_rdma.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:04.204 14:14:31 nvmf_rdma.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:04.204 14:14:31 nvmf_rdma.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:04.204 14:14:31 nvmf_rdma.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:16:04.204 14:14:31 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:04.204 14:14:31 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:16:04.204 14:14:31 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:04.204 14:14:31 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:04.204 14:14:31 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:16:04.204 14:14:31 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:04.204 14:14:31 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:04.204 rmmod nvme_rdma 00:16:04.204 rmmod nvme_fabrics 00:16:04.204 14:14:31 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:04.204 14:14:31 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:16:04.204 14:14:31 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:16:04.204 14:14:31 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 78981 ']' 00:16:04.204 14:14:31 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 78981 00:16:04.204 14:14:31 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 78981 ']' 00:16:04.204 14:14:31 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 78981 00:16:04.204 14:14:31 nvmf_rdma.nvmf_host_management -- 
common/autotest_common.sh@951 -- # uname 00:16:04.204 14:14:31 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:04.204 14:14:31 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78981 00:16:04.464 14:14:31 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:04.464 14:14:31 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:04.464 14:14:31 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78981' 00:16:04.464 killing process with pid 78981 00:16:04.464 14:14:31 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 78981 00:16:04.464 14:14:31 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 78981 00:16:04.722 [2024-07-24 14:14:31.904010] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:04.722 14:14:31 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:04.722 14:14:31 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:16:04.722 14:14:31 nvmf_rdma.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:04.722 00:16:04.722 real 0m6.807s 00:16:04.722 user 0m19.511s 00:16:04.722 sys 0m2.712s 00:16:04.722 14:14:31 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:04.722 14:14:31 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:04.722 ************************************ 00:16:04.722 END TEST nvmf_host_management 00:16:04.722 ************************************ 00:16:04.722 14:14:31 nvmf_rdma -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:16:04.722 14:14:31 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:04.722 14:14:31 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:04.722 14:14:31 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:16:04.722 ************************************ 00:16:04.722 START TEST nvmf_lvol 00:16:04.722 ************************************ 00:16:04.722 14:14:31 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:16:04.722 * Looking for test storage... 
00:16:04.722 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt 
]] 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:16:04.722 14:14:32 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:07.250 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:07.250 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:16:07.250 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:07.250 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:07.250 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:07.250 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:07.250 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:07.250 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:16:07.250 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:07.250 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:16:07.250 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:16:07.250 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:16:07.250 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:16:07.250 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:16:07.250 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:16:07.250 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:07.250 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:07.250 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:07.250 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:07.250 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:07.250 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:07.250 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:07.250 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:07.250 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:07.251 14:14:34 
nvmf_rdma.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:16:07.251 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:16:07.251 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:16:07.251 Found net devices under 0000:81:00.0: mlx_0_0 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:16:07.251 Found net devices under 0000:81:00.1: mlx_0_1 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:07.251 14:14:34 
nvmf_rdma.nvmf_lvol -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@420 -- # rdma_device_init 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@58 -- # uname 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@502 -- # allocate_nic_ips 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- 
nvmf/common.sh@113 -- # cut -d/ -f1 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:07.251 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:07.251 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:16:07.251 altname enp129s0f0np0 00:16:07.251 inet 192.168.100.8/24 scope global mlx_0_0 00:16:07.251 valid_lft forever preferred_lft forever 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:07.251 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:07.251 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:16:07.251 altname enp129s0f1np1 00:16:07.251 inet 192.168.100.9/24 scope global mlx_0_1 00:16:07.251 valid_lft forever preferred_lft forever 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:07.251 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol 
-- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:16:07.252 192.168.100.9' 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:16:07.252 192.168.100.9' 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # head -n 1 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:16:07.252 192.168.100.9' 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # tail -n +2 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # head -n 1 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=81599 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 81599 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 81599 ']' 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:07.252 14:14:34 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:07.510 [2024-07-24 14:14:34.644185] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:16:07.510 [2024-07-24 14:14:34.644271] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:07.510 EAL: No free 2048 kB hugepages reported on node 1 00:16:07.510 [2024-07-24 14:14:34.710623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:07.510 [2024-07-24 14:14:34.798104] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:07.510 [2024-07-24 14:14:34.798176] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:07.510 [2024-07-24 14:14:34.798189] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:07.510 [2024-07-24 14:14:34.798201] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:07.510 [2024-07-24 14:14:34.798212] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:07.510 [2024-07-24 14:14:34.798268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:07.510 [2024-07-24 14:14:34.798328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:07.510 [2024-07-24 14:14:34.798330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.768 14:14:34 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:07.768 14:14:34 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:16:07.768 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:07.768 14:14:34 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:07.768 14:14:34 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:07.768 14:14:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:07.768 14:14:34 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:08.026 [2024-07-24 14:14:35.233686] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x113af00/0x113f3b0) succeed. 00:16:08.026 [2024-07-24 14:14:35.244050] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x113c450/0x1180a40) succeed. 
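With the RDMA transport created and both mlx5 IB devices registered, the test assembles its backing stack over /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py: two 64 MiB malloc bdevs with 512-byte blocks striped into raid0, an lvstore on the raid, a 20 MiB lvol exported as an NVMe-oF namespace, then a snapshot/resize/clone/inflate pass while spdk_nvme_perf drives I/O. Condensed into a sketch -- rpc.py abbreviates the full script path, and capturing the returned names/UUIDs in shell variables is illustrative since the actual UUIDs are assigned at runtime; every command and size matches what the test issues below:

rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
rpc.py bdev_malloc_create 64 512                    # creates Malloc0
rpc.py bdev_malloc_create 64 512                    # creates Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)    # prints the new lvstore UUID
lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)   # 20 MiB logical volume
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
snap=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
rpc.py bdev_lvol_resize "$lvol" 30                  # grow the live lvol to 30 MiB
clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)
rpc.py bdev_lvol_inflate "$clone"                   # decouple the clone from its snapshot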
00:16:08.026 14:14:35 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:08.591 14:14:35 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:08.591 14:14:35 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:08.591 14:14:35 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:08.591 14:14:35 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:08.848 14:14:36 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:09.106 14:14:36 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=3d506cc9-d3f8-4569-aacd-74819ea25288 00:16:09.106 14:14:36 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3d506cc9-d3f8-4569-aacd-74819ea25288 lvol 20 00:16:09.364 14:14:36 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=332c3b61-cef1-4666-b637-55be8e990423 00:16:09.364 14:14:36 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:09.622 14:14:36 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 332c3b61-cef1-4666-b637-55be8e990423 00:16:09.879 14:14:37 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:16:10.136 [2024-07-24 14:14:37.401973] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:10.136 14:14:37 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:16:10.394 14:14:37 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=81906 00:16:10.394 14:14:37 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:10.394 14:14:37 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:10.394 EAL: No free 2048 kB hugepages reported on node 1 00:16:11.327 14:14:38 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 332c3b61-cef1-4666-b637-55be8e990423 MY_SNAPSHOT 00:16:11.585 14:14:38 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=12037031-a5c7-474e-a07d-576808153bda 00:16:11.585 14:14:38 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 332c3b61-cef1-4666-b637-55be8e990423 30 00:16:11.843 14:14:39 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 12037031-a5c7-474e-a07d-576808153bda MY_CLONE 00:16:12.100 14:14:39 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # 
clone=03d60bd6-efdc-4c72-b4b7-78efc9f82cdb 00:16:12.100 14:14:39 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 03d60bd6-efdc-4c72-b4b7-78efc9f82cdb 00:16:12.358 14:14:39 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 81906 00:16:22.325 Initializing NVMe Controllers 00:16:22.325 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:16:22.325 Controller IO queue size 128, less than required. 00:16:22.325 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:22.325 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:22.325 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:22.325 Initialization complete. Launching workers. 00:16:22.325 ======================================================== 00:16:22.325 Latency(us) 00:16:22.325 Device Information : IOPS MiB/s Average min max 00:16:22.325 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 14492.40 56.61 8835.55 3185.43 58564.91 00:16:22.325 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 14528.50 56.75 8813.00 2971.77 46563.97 00:16:22.325 ======================================================== 00:16:22.325 Total : 29020.90 113.36 8824.26 2971.77 58564.91 00:16:22.325 00:16:22.325 14:14:49 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:22.325 14:14:49 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 332c3b61-cef1-4666-b637-55be8e990423 00:16:22.325 14:14:49 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3d506cc9-d3f8-4569-aacd-74819ea25288 00:16:22.583 14:14:49 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:22.583 14:14:49 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:22.583 14:14:49 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:22.583 14:14:49 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:22.583 14:14:49 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:16:22.583 14:14:49 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:22.583 14:14:49 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:22.583 14:14:49 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:16:22.583 14:14:49 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:22.583 14:14:49 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:22.583 rmmod nvme_rdma 00:16:22.583 rmmod nvme_fabrics 00:16:22.583 14:14:49 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:22.583 14:14:49 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:16:22.583 14:14:49 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:16:22.583 14:14:49 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 81599 ']' 00:16:22.583 14:14:49 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 81599 00:16:22.583 14:14:49 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 81599 ']' 00:16:22.583 14:14:49 nvmf_rdma.nvmf_lvol -- 
common/autotest_common.sh@950 -- # kill -0 81599 00:16:22.583 14:14:49 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:16:22.583 14:14:49 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:22.583 14:14:49 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81599 00:16:22.841 14:14:49 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:22.841 14:14:49 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:22.841 14:14:49 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81599' 00:16:22.841 killing process with pid 81599 00:16:22.841 14:14:49 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 81599 00:16:22.841 14:14:49 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 81599 00:16:23.099 14:14:50 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:23.099 14:14:50 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:16:23.099 00:16:23.099 real 0m18.345s 00:16:23.099 user 1m12.858s 00:16:23.099 sys 0m3.051s 00:16:23.099 14:14:50 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:23.099 14:14:50 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:23.099 ************************************ 00:16:23.099 END TEST nvmf_lvol 00:16:23.099 ************************************ 00:16:23.099 14:14:50 nvmf_rdma -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:16:23.099 14:14:50 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:23.099 14:14:50 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:23.099 14:14:50 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:16:23.099 ************************************ 00:16:23.099 START TEST nvmf_lvs_grow 00:16:23.099 ************************************ 00:16:23.099 14:14:50 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:16:23.099 * Looking for test storage... 
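For reference while reading the nvmf_lvol trace that just finished: a minimal sketch of the RPC sequence it drives, with the paths and the 192.168.100.8/4420 listener taken from this run. The UUIDs are captured from rpc.py output rather than hard-coded, and a running nvmf_tgt is assumed; this is an outline of the test flow, not a replacement for the script.

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

  # Two 64 MiB malloc bdevs with 512 B blocks, striped into a RAID0 base.
  $rpc bdev_malloc_create 64 512                # -> Malloc0
  $rpc bdev_malloc_create 64 512                # -> Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'

  # Logical volume store on the RAID0 bdev, then a 20 MiB lvol inside it.
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)

  # Export the lvol over NVMe/RDMA on the test network.
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420

  # While spdk_nvme_perf runs random writes against it for 10 s
  # (-o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18), mutate the lvol live.
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$lvol" 30
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"

  # Teardown, mirroring the tail of the trace.
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  $rpc bdev_lvol_delete "$lvol"
  $rpc bdev_lvol_delete_lvstore -u "$lvs"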
00:16:23.099 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:23.099 14:14:50 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:23.099 14:14:50 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:16:23.099 14:14:50 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:23.099 14:14:50 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:23.099 14:14:50 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:23.099 14:14:50 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:23.099 14:14:50 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:23.099 14:14:50 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:23.099 14:14:50 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:23.099 14:14:50 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:23.099 14:14:50 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:23.099 14:14:50 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:23.099 14:14:50 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:16:23.099 14:14:50 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:16:23.099 14:14:50 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:23.099 14:14:50 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:23.099 14:14:50 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:23.099 14:14:50 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:23.099 14:14:50 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:23.099 14:14:50 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.099 14:14:50 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.099 14:14:50 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.099 14:14:50 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.100 14:14:50 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.100 14:14:50 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.100 14:14:50 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:16:23.100 14:14:50 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.100 14:14:50 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:16:23.100 14:14:50 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:23.100 14:14:50 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:23.100 14:14:50 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:23.100 14:14:50 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:23.100 14:14:50 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:23.100 14:14:50 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:23.100 14:14:50 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:23.100 14:14:50 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:23.100 14:14:50 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:16:23.100 14:14:50 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:23.100 14:14:50 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:16:23.100 14:14:50 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:16:23.100 14:14:50 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:23.100 14:14:50 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:23.100 14:14:50 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:23.100 14:14:50 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:23.100 14:14:50 nvmf_rdma.nvmf_lvs_grow -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.100 14:14:50 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:23.100 14:14:50 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.100 14:14:50 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:23.100 14:14:50 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:23.100 14:14:50 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:16:23.100 14:14:50 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:25.631 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@322 -- # 
pci_devs+=("${x722[@]}") 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:16:25.632 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:16:25.632 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:16:25.632 Found net devices under 0000:81:00.0: mlx_0_0 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:25.632 14:14:52 
nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:16:25.632 Found net devices under 0000:81:00.1: mlx_0_1 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@420 -- # rdma_device_init 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@58 -- # uname 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@502 -- # allocate_nic_ips 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:25.632 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:25.632 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:16:25.632 altname enp129s0f0np0 00:16:25.632 inet 192.168.100.8/24 scope global mlx_0_0 00:16:25.632 valid_lft forever preferred_lft forever 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:25.632 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:25.633 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:25.633 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:16:25.633 altname enp129s0f1np1 00:16:25.633 inet 192.168.100.9/24 scope global mlx_0_1 00:16:25.633 valid_lft forever preferred_lft forever 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 
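The interface probe traced above reduces to a few commands; a condensed sketch of what load_ib_rdma_modules and get_ip_address do on this host, with the interface names and addresses as printed in the trace:

  # Kernel RDMA stack, as modprobed by load_ib_rdma_modules.
  for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$m"
  done

  # get_ip_address: first IPv4 address on each mlx port, prefix stripped.
  for ifc in mlx_0_0 mlx_0_1; do
    ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
  done
  # Prints 192.168.100.8 and 192.168.100.9 here; the [[ -z $ip ]] guard in
  # allocate_nic_ips falls through because both addresses are already set.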
00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:16:25.633 192.168.100.9' 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:16:25.633 192.168.100.9' 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # head -n 1 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:16:25.633 192.168.100.9' 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # tail -n +2 00:16:25.633 14:14:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # head -n 1 00:16:25.891 14:14:53 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:25.891 14:14:53 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:16:25.891 14:14:53 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:25.891 14:14:53 nvmf_rdma.nvmf_lvs_grow -- 
nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:16:25.891 14:14:53 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:16:25.892 14:14:53 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:16:25.892 14:14:53 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:16:25.892 14:14:53 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:25.892 14:14:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:25.892 14:14:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:25.892 14:14:53 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=85433 00:16:25.892 14:14:53 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:25.892 14:14:53 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 85433 00:16:25.892 14:14:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 85433 ']' 00:16:25.892 14:14:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.892 14:14:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:25.892 14:14:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.892 14:14:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:25.892 14:14:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:25.892 [2024-07-24 14:14:53.061179] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:16:25.892 [2024-07-24 14:14:53.061244] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:25.892 EAL: No free 2048 kB hugepages reported on node 1 00:16:25.892 [2024-07-24 14:14:53.127505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.892 [2024-07-24 14:14:53.209878] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:25.892 [2024-07-24 14:14:53.209944] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:25.892 [2024-07-24 14:14:53.209968] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:25.892 [2024-07-24 14:14:53.209979] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:25.892 [2024-07-24 14:14:53.209989] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
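nvmfappstart, traced above, boils down to launching the target and polling its RPC socket. A rough equivalent follows; the retry loop is a paraphrase of waitforlisten (which also caps its retries — max_retries=100 in the trace), not its exact code:

  # Start the target: app instance 0, all tracepoint groups, one core.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!

  # waitforlisten: poll /var/tmp/spdk.sock until the app answers RPCs.
  until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done

  # Then the transport the lvs_grow tests use, matching NVMF_TRANSPORT_OPTS.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192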
00:16:25.892 [2024-07-24 14:14:53.210019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.154 14:14:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:26.154 14:14:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:16:26.154 14:14:53 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:26.154 14:14:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:26.154 14:14:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:26.154 14:14:53 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:26.154 14:14:53 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:26.416 [2024-07-24 14:14:53.640920] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x169b970/0x169fe20) succeed. 00:16:26.416 [2024-07-24 14:14:53.653118] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x169ce20/0x16e14b0) succeed. 00:16:26.416 14:14:53 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:16:26.416 14:14:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:26.416 14:14:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:26.416 14:14:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:26.416 ************************************ 00:16:26.416 START TEST lvs_grow_clean 00:16:26.416 ************************************ 00:16:26.416 14:14:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:16:26.416 14:14:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:26.416 14:14:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:26.416 14:14:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:26.416 14:14:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:26.416 14:14:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:26.416 14:14:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:26.416 14:14:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:26.416 14:14:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:26.416 14:14:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:26.982 14:14:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:26.982 14:14:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 
--md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:26.982 14:14:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=003fdae5-3ff7-4cd0-a3e0-0c8dbdb2fc06 00:16:26.982 14:14:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 003fdae5-3ff7-4cd0-a3e0-0c8dbdb2fc06 00:16:26.982 14:14:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:27.239 14:14:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:27.239 14:14:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:27.239 14:14:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 003fdae5-3ff7-4cd0-a3e0-0c8dbdb2fc06 lvol 150 00:16:27.498 14:14:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=090e39ab-3c52-4e9b-9b0c-1f5a5243b065 00:16:27.498 14:14:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:27.498 14:14:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:27.756 [2024-07-24 14:14:55.055105] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:27.756 [2024-07-24 14:14:55.055209] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:27.756 true 00:16:27.756 14:14:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 003fdae5-3ff7-4cd0-a3e0-0c8dbdb2fc06 00:16:27.756 14:14:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:28.013 14:14:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:28.014 14:14:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:28.271 14:14:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 090e39ab-3c52-4e9b-9b0c-1f5a5243b065 00:16:28.528 14:14:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:16:28.786 [2024-07-24 14:14:56.086367] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:28.786 14:14:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:16:29.044 14:14:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=85881 
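The cluster counts asserted here are simple arithmetic: with --cluster-sz 4194304 (4 MiB), the 200 MiB backing file holds 50 clusters, and the lvstore keeps one for metadata in this configuration, hence total_data_clusters == 49; after the file grows to 400 MiB the same count becomes 100 - 1 == 99. A sketch of the grow path under test, using the file path from the trace:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  aio=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev

  truncate -s 200M "$aio"
  $rpc bdev_aio_create "$aio" aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)          # 150 MiB lvol

  truncate -s 400M "$aio"                                   # enlarge the file in place
  $rpc bdev_aio_rescan aio_bdev                             # bdev sees 51200 -> 102400 blocks
  $rpc bdev_lvol_grow_lvstore -u "$lvs"                     # lvstore claims the new clusters
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99

  # (The 150 MiB lvol itself occupies ceil(150/4) = 38 clusters, so the
  #  later free_clusters check expects 99 - 38 = 61.)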
00:16:29.044 14:14:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:29.044 14:14:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:29.044 14:14:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 85881 /var/tmp/bdevperf.sock 00:16:29.044 14:14:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 85881 ']' 00:16:29.044 14:14:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:29.044 14:14:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:29.044 14:14:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:29.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:29.044 14:14:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:29.044 14:14:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:29.302 [2024-07-24 14:14:56.429142] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:16:29.302 [2024-07-24 14:14:56.429215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85881 ] 00:16:29.302 EAL: No free 2048 kB hugepages reported on node 1 00:16:29.302 [2024-07-24 14:14:56.499549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.302 [2024-07-24 14:14:56.590324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:29.560 14:14:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:29.560 14:14:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:16:29.560 14:14:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:29.817 Nvme0n1 00:16:29.817 14:14:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:30.117 [ 00:16:30.117 { 00:16:30.117 "name": "Nvme0n1", 00:16:30.117 "aliases": [ 00:16:30.117 "090e39ab-3c52-4e9b-9b0c-1f5a5243b065" 00:16:30.117 ], 00:16:30.117 "product_name": "NVMe disk", 00:16:30.117 "block_size": 4096, 00:16:30.117 "num_blocks": 38912, 00:16:30.117 "uuid": "090e39ab-3c52-4e9b-9b0c-1f5a5243b065", 00:16:30.117 "assigned_rate_limits": { 00:16:30.117 "rw_ios_per_sec": 0, 00:16:30.117 "rw_mbytes_per_sec": 0, 00:16:30.117 "r_mbytes_per_sec": 0, 00:16:30.117 "w_mbytes_per_sec": 0 00:16:30.117 }, 00:16:30.117 "claimed": false, 00:16:30.117 "zoned": false, 00:16:30.117 "supported_io_types": { 00:16:30.117 "read": true, 00:16:30.117 "write": true, 00:16:30.117 
"unmap": true, 00:16:30.117 "write_zeroes": true, 00:16:30.117 "flush": true, 00:16:30.117 "reset": true, 00:16:30.117 "compare": true, 00:16:30.117 "compare_and_write": true, 00:16:30.117 "abort": true, 00:16:30.117 "nvme_admin": true, 00:16:30.117 "nvme_io": true 00:16:30.117 }, 00:16:30.117 "memory_domains": [ 00:16:30.117 { 00:16:30.117 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:16:30.117 "dma_device_type": 0 00:16:30.117 } 00:16:30.117 ], 00:16:30.117 "driver_specific": { 00:16:30.117 "nvme": [ 00:16:30.117 { 00:16:30.117 "trid": { 00:16:30.117 "trtype": "RDMA", 00:16:30.117 "adrfam": "IPv4", 00:16:30.117 "traddr": "192.168.100.8", 00:16:30.117 "trsvcid": "4420", 00:16:30.117 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:30.117 }, 00:16:30.117 "ctrlr_data": { 00:16:30.117 "cntlid": 1, 00:16:30.117 "vendor_id": "0x8086", 00:16:30.117 "model_number": "SPDK bdev Controller", 00:16:30.117 "serial_number": "SPDK0", 00:16:30.117 "firmware_revision": "24.05.1", 00:16:30.117 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:30.117 "oacs": { 00:16:30.117 "security": 0, 00:16:30.117 "format": 0, 00:16:30.117 "firmware": 0, 00:16:30.117 "ns_manage": 0 00:16:30.117 }, 00:16:30.117 "multi_ctrlr": true, 00:16:30.117 "ana_reporting": false 00:16:30.117 }, 00:16:30.117 "vs": { 00:16:30.117 "nvme_version": "1.3" 00:16:30.117 }, 00:16:30.117 "ns_data": { 00:16:30.117 "id": 1, 00:16:30.117 "can_share": true 00:16:30.117 } 00:16:30.117 } 00:16:30.117 ], 00:16:30.117 "mp_policy": "active_passive" 00:16:30.117 } 00:16:30.117 } 00:16:30.117 ] 00:16:30.117 14:14:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=86012 00:16:30.117 14:14:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:30.117 14:14:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:30.377 Running I/O for 10 seconds... 
00:16:31.311 Latency(us) 00:16:31.311 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.311 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:31.311 Nvme0n1 : 1.00 21732.00 84.89 0.00 0.00 0.00 0.00 0.00 00:16:31.311 =================================================================================================================== 00:16:31.311 Total : 21732.00 84.89 0.00 0.00 0.00 0.00 0.00 00:16:31.311 00:16:32.245 14:14:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 003fdae5-3ff7-4cd0-a3e0-0c8dbdb2fc06 00:16:32.245 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:32.245 Nvme0n1 : 2.00 21536.50 84.13 0.00 0.00 0.00 0.00 0.00 00:16:32.245 =================================================================================================================== 00:16:32.245 Total : 21536.50 84.13 0.00 0.00 0.00 0.00 0.00 00:16:32.245 00:16:32.503 true 00:16:32.503 14:14:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 003fdae5-3ff7-4cd0-a3e0-0c8dbdb2fc06 00:16:32.503 14:14:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:32.761 14:14:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:32.761 14:14:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:32.761 14:14:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 86012 00:16:33.326 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:33.326 Nvme0n1 : 3.00 21473.00 83.88 0.00 0.00 0.00 0.00 0.00 00:16:33.326 =================================================================================================================== 00:16:33.326 Total : 21473.00 83.88 0.00 0.00 0.00 0.00 0.00 00:16:33.326 00:16:34.260 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:34.260 Nvme0n1 : 4.00 21480.25 83.91 0.00 0.00 0.00 0.00 0.00 00:16:34.260 =================================================================================================================== 00:16:34.260 Total : 21480.25 83.91 0.00 0.00 0.00 0.00 0.00 00:16:34.260 00:16:35.194 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:35.194 Nvme0n1 : 5.00 21804.60 85.17 0.00 0.00 0.00 0.00 0.00 00:16:35.194 =================================================================================================================== 00:16:35.194 Total : 21804.60 85.17 0.00 0.00 0.00 0.00 0.00 00:16:35.194 00:16:36.135 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:36.135 Nvme0n1 : 6.00 21797.00 85.14 0.00 0.00 0.00 0.00 0.00 00:16:36.135 =================================================================================================================== 00:16:36.135 Total : 21797.00 85.14 0.00 0.00 0.00 0.00 0.00 00:16:36.135 00:16:37.506 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:37.506 Nvme0n1 : 7.00 22029.29 86.05 0.00 0.00 0.00 0.00 0.00 00:16:37.506 =================================================================================================================== 00:16:37.506 Total : 22029.29 86.05 0.00 0.00 0.00 0.00 0.00 
00:16:37.506 00:16:38.438 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:38.438 Nvme0n1 : 8.00 22023.62 86.03 0.00 0.00 0.00 0.00 0.00 00:16:38.438 =================================================================================================================== 00:16:38.438 Total : 22023.62 86.03 0.00 0.00 0.00 0.00 0.00 00:16:38.438 00:16:39.370 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:39.370 Nvme0n1 : 9.00 22013.11 85.99 0.00 0.00 0.00 0.00 0.00 00:16:39.370 =================================================================================================================== 00:16:39.370 Total : 22013.11 85.99 0.00 0.00 0.00 0.00 0.00 00:16:39.370 00:16:40.301 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:40.301 Nvme0n1 : 10.00 22153.90 86.54 0.00 0.00 0.00 0.00 0.00 00:16:40.301 =================================================================================================================== 00:16:40.301 Total : 22153.90 86.54 0.00 0.00 0.00 0.00 0.00 00:16:40.301 00:16:40.301 00:16:40.301 Latency(us) 00:16:40.301 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:40.301 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:40.301 Nvme0n1 : 10.01 22155.41 86.54 0.00 0.00 5772.54 4271.98 16214.09 00:16:40.301 =================================================================================================================== 00:16:40.301 Total : 22155.41 86.54 0.00 0.00 5772.54 4271.98 16214.09 00:16:40.301 0 00:16:40.301 14:15:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 85881 00:16:40.301 14:15:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 85881 ']' 00:16:40.301 14:15:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 85881 00:16:40.301 14:15:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:16:40.301 14:15:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:40.301 14:15:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85881 00:16:40.301 14:15:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:40.301 14:15:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:40.301 14:15:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85881' 00:16:40.301 killing process with pid 85881 00:16:40.301 14:15:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 85881 00:16:40.301 Received shutdown signal, test time was about 10.000000 seconds 00:16:40.301 00:16:40.301 Latency(us) 00:16:40.301 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:40.301 =================================================================================================================== 00:16:40.301 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:40.301 14:15:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 85881 00:16:40.558 14:15:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 
192.168.100.8 -s 4420 00:16:40.815 14:15:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:41.072 14:15:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 003fdae5-3ff7-4cd0-a3e0-0c8dbdb2fc06 00:16:41.072 14:15:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:41.329 14:15:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:41.329 14:15:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:16:41.330 14:15:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:41.587 [2024-07-24 14:15:08.815737] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:41.587 14:15:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 003fdae5-3ff7-4cd0-a3e0-0c8dbdb2fc06 00:16:41.587 14:15:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:16:41.587 14:15:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 003fdae5-3ff7-4cd0-a3e0-0c8dbdb2fc06 00:16:41.587 14:15:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:16:41.587 14:15:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:41.587 14:15:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:16:41.587 14:15:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:41.587 14:15:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:16:41.587 14:15:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:41.587 14:15:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:16:41.587 14:15:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:16:41.587 14:15:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 003fdae5-3ff7-4cd0-a3e0-0c8dbdb2fc06 00:16:41.844 request: 00:16:41.844 { 00:16:41.844 "uuid": "003fdae5-3ff7-4cd0-a3e0-0c8dbdb2fc06", 00:16:41.844 "method": "bdev_lvol_get_lvstores", 00:16:41.844 "req_id": 1 00:16:41.844 } 00:16:41.844 Got JSON-RPC error response 00:16:41.844 response: 00:16:41.844 { 00:16:41.844 "code": -19, 00:16:41.844 "message": "No such device" 00:16:41.844 } 00:16:41.844 14:15:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@651 -- # es=1 00:16:41.844 14:15:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:41.844 14:15:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:41.844 14:15:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:41.844 14:15:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:42.101 aio_bdev 00:16:42.101 14:15:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 090e39ab-3c52-4e9b-9b0c-1f5a5243b065 00:16:42.101 14:15:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=090e39ab-3c52-4e9b-9b0c-1f5a5243b065 00:16:42.101 14:15:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:42.101 14:15:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:16:42.102 14:15:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:42.102 14:15:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:42.102 14:15:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:42.359 14:15:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 090e39ab-3c52-4e9b-9b0c-1f5a5243b065 -t 2000 00:16:42.617 [ 00:16:42.617 { 00:16:42.617 "name": "090e39ab-3c52-4e9b-9b0c-1f5a5243b065", 00:16:42.617 "aliases": [ 00:16:42.617 "lvs/lvol" 00:16:42.617 ], 00:16:42.617 "product_name": "Logical Volume", 00:16:42.617 "block_size": 4096, 00:16:42.617 "num_blocks": 38912, 00:16:42.617 "uuid": "090e39ab-3c52-4e9b-9b0c-1f5a5243b065", 00:16:42.617 "assigned_rate_limits": { 00:16:42.617 "rw_ios_per_sec": 0, 00:16:42.617 "rw_mbytes_per_sec": 0, 00:16:42.617 "r_mbytes_per_sec": 0, 00:16:42.617 "w_mbytes_per_sec": 0 00:16:42.617 }, 00:16:42.617 "claimed": false, 00:16:42.617 "zoned": false, 00:16:42.617 "supported_io_types": { 00:16:42.617 "read": true, 00:16:42.617 "write": true, 00:16:42.617 "unmap": true, 00:16:42.617 "write_zeroes": true, 00:16:42.617 "flush": false, 00:16:42.617 "reset": true, 00:16:42.617 "compare": false, 00:16:42.617 "compare_and_write": false, 00:16:42.617 "abort": false, 00:16:42.617 "nvme_admin": false, 00:16:42.617 "nvme_io": false 00:16:42.617 }, 00:16:42.617 "driver_specific": { 00:16:42.617 "lvol": { 00:16:42.617 "lvol_store_uuid": "003fdae5-3ff7-4cd0-a3e0-0c8dbdb2fc06", 00:16:42.617 "base_bdev": "aio_bdev", 00:16:42.617 "thin_provision": false, 00:16:42.617 "num_allocated_clusters": 38, 00:16:42.617 "snapshot": false, 00:16:42.617 "clone": false, 00:16:42.617 "esnap_clone": false 00:16:42.617 } 00:16:42.617 } 00:16:42.617 } 00:16:42.617 ] 00:16:42.617 14:15:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:16:42.617 14:15:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 003fdae5-3ff7-4cd0-a3e0-0c8dbdb2fc06 00:16:42.617 14:15:09 
nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:42.874 14:15:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:42.874 14:15:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 003fdae5-3ff7-4cd0-a3e0-0c8dbdb2fc06 00:16:42.874 14:15:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:43.132 14:15:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:43.132 14:15:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 090e39ab-3c52-4e9b-9b0c-1f5a5243b065 00:16:43.389 14:15:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 003fdae5-3ff7-4cd0-a3e0-0c8dbdb2fc06 00:16:43.645 14:15:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:43.903 14:15:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:43.903 00:16:43.903 real 0m17.415s 00:16:43.903 user 0m17.405s 00:16:43.903 sys 0m1.323s 00:16:43.903 14:15:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:43.903 14:15:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:43.903 ************************************ 00:16:43.903 END TEST lvs_grow_clean 00:16:43.903 ************************************ 00:16:43.903 14:15:11 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:16:43.903 14:15:11 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:43.903 14:15:11 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:43.903 14:15:11 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:43.903 ************************************ 00:16:43.903 START TEST lvs_grow_dirty 00:16:43.903 ************************************ 00:16:43.903 14:15:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:16:43.903 14:15:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:43.903 14:15:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:43.903 14:15:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:43.903 14:15:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:43.903 14:15:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:43.903 14:15:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:43.903 14:15:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:43.903 14:15:11 
nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:43.903 14:15:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:44.160 14:15:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:44.160 14:15:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:44.455 14:15:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=f664d4a9-46c4-439c-a3d1-64c2811ff2ea 00:16:44.455 14:15:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:44.455 14:15:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f664d4a9-46c4-439c-a3d1-64c2811ff2ea 00:16:44.713 14:15:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:44.713 14:15:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:44.713 14:15:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f664d4a9-46c4-439c-a3d1-64c2811ff2ea lvol 150 00:16:44.970 14:15:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=589724ab-a7c9-461a-a0b4-ba7e3838df10 00:16:44.970 14:15:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:44.971 14:15:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:45.228 [2024-07-24 14:15:12.506052] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:45.228 [2024-07-24 14:15:12.506147] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:45.228 true 00:16:45.228 14:15:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f664d4a9-46c4-439c-a3d1-64c2811ff2ea 00:16:45.228 14:15:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:45.485 14:15:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:45.485 14:15:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:45.742 14:15:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 589724ab-a7c9-461a-a0b4-ba7e3838df10 
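
Condensed from the xtrace above, the dirty-variant setup repeats the clean one: create a 200M AIO file, build an lvstore on it, carve a 150M lvol, then grow the backing file and rescan. Note that the rescan alone does not grow the lvstore (total_data_clusters stays 49); that takes the explicit bdev_lvol_grow_lvstore call issued later during the I/O run. A sketch, with the long Jenkins workspace paths shortened to rpc.py and ./aio_bdev, and the UUIDs being the ones this run reported:

    truncate -s 200M ./aio_bdev
    rpc.py bdev_aio_create ./aio_bdev aio_bdev 4096
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)   # f664d4a9-...
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)     # 589724ab-...
    truncate -s 400M ./aio_bdev                            # grow the backing file
    rpc.py bdev_aio_rescan aio_bdev                        # 51200 -> 102400 blocks
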
00:16:46.000 14:15:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:16:46.255 [2024-07-24 14:15:13.625594] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:46.512 14:15:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:16:46.770 14:15:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=88540 00:16:46.770 14:15:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:46.770 14:15:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:46.770 14:15:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 88540 /var/tmp/bdevperf.sock 00:16:46.770 14:15:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 88540 ']' 00:16:46.770 14:15:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:46.770 14:15:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:46.770 14:15:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:46.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:46.770 14:15:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:46.770 14:15:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:46.770 [2024-07-24 14:15:13.932644] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
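
The lvol is then exported over NVMe-oF RDMA and a separate bdevperf process is launched to drive I/O against it; once bdevperf is listening, the test attaches the target as Nvme0 through bdevperf's own RPC socket (the bdev_nvme_attach_controller call below). Again condensed (rpc.py stands for the full scripts/rpc.py path, bdevperf for build/examples/bdevperf):

    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t rdma -a 192.168.100.8 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
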
00:16:46.770 [2024-07-24 14:15:13.932733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88540 ] 00:16:46.770 EAL: No free 2048 kB hugepages reported on node 1 00:16:46.770 [2024-07-24 14:15:14.000395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.770 [2024-07-24 14:15:14.089463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:47.027 14:15:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:47.027 14:15:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:16:47.027 14:15:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:47.285 Nvme0n1 00:16:47.285 14:15:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:47.543 [ 00:16:47.543 { 00:16:47.543 "name": "Nvme0n1", 00:16:47.543 "aliases": [ 00:16:47.543 "589724ab-a7c9-461a-a0b4-ba7e3838df10" 00:16:47.543 ], 00:16:47.543 "product_name": "NVMe disk", 00:16:47.543 "block_size": 4096, 00:16:47.543 "num_blocks": 38912, 00:16:47.543 "uuid": "589724ab-a7c9-461a-a0b4-ba7e3838df10", 00:16:47.543 "assigned_rate_limits": { 00:16:47.543 "rw_ios_per_sec": 0, 00:16:47.543 "rw_mbytes_per_sec": 0, 00:16:47.543 "r_mbytes_per_sec": 0, 00:16:47.543 "w_mbytes_per_sec": 0 00:16:47.543 }, 00:16:47.543 "claimed": false, 00:16:47.543 "zoned": false, 00:16:47.543 "supported_io_types": { 00:16:47.543 "read": true, 00:16:47.543 "write": true, 00:16:47.543 "unmap": true, 00:16:47.543 "write_zeroes": true, 00:16:47.543 "flush": true, 00:16:47.543 "reset": true, 00:16:47.543 "compare": true, 00:16:47.543 "compare_and_write": true, 00:16:47.543 "abort": true, 00:16:47.543 "nvme_admin": true, 00:16:47.543 "nvme_io": true 00:16:47.543 }, 00:16:47.543 "memory_domains": [ 00:16:47.543 { 00:16:47.543 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:16:47.543 "dma_device_type": 0 00:16:47.543 } 00:16:47.543 ], 00:16:47.543 "driver_specific": { 00:16:47.543 "nvme": [ 00:16:47.543 { 00:16:47.543 "trid": { 00:16:47.543 "trtype": "RDMA", 00:16:47.543 "adrfam": "IPv4", 00:16:47.543 "traddr": "192.168.100.8", 00:16:47.543 "trsvcid": "4420", 00:16:47.543 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:47.543 }, 00:16:47.543 "ctrlr_data": { 00:16:47.543 "cntlid": 1, 00:16:47.543 "vendor_id": "0x8086", 00:16:47.543 "model_number": "SPDK bdev Controller", 00:16:47.543 "serial_number": "SPDK0", 00:16:47.543 "firmware_revision": "24.05.1", 00:16:47.543 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:47.543 "oacs": { 00:16:47.543 "security": 0, 00:16:47.543 "format": 0, 00:16:47.543 "firmware": 0, 00:16:47.543 "ns_manage": 0 00:16:47.543 }, 00:16:47.543 "multi_ctrlr": true, 00:16:47.543 "ana_reporting": false 00:16:47.543 }, 00:16:47.543 "vs": { 00:16:47.543 "nvme_version": "1.3" 00:16:47.543 }, 00:16:47.543 "ns_data": { 00:16:47.543 "id": 1, 00:16:47.543 "can_share": true 00:16:47.543 } 00:16:47.543 } 00:16:47.543 ], 00:16:47.543 "mp_policy": "active_passive" 00:16:47.543 } 00:16:47.543 } 00:16:47.543 ] 00:16:47.543 14:15:14 
nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=88671 00:16:47.543 14:15:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:47.543 14:15:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:47.543 Running I/O for 10 seconds... 00:16:48.916 Latency(us) 00:16:48.916 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:48.916 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:48.916 Nvme0n1 : 1.00 20865.00 81.50 0.00 0.00 0.00 0.00 0.00 00:16:48.916 =================================================================================================================== 00:16:48.916 Total : 20865.00 81.50 0.00 0.00 0.00 0.00 0.00 00:16:48.916 00:16:49.483 14:15:16 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f664d4a9-46c4-439c-a3d1-64c2811ff2ea 00:16:49.741 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:49.741 Nvme0n1 : 2.00 21009.00 82.07 0.00 0.00 0.00 0.00 0.00 00:16:49.741 =================================================================================================================== 00:16:49.741 Total : 21009.00 82.07 0.00 0.00 0.00 0.00 0.00 00:16:49.741 00:16:49.741 true 00:16:49.999 14:15:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f664d4a9-46c4-439c-a3d1-64c2811ff2ea 00:16:49.999 14:15:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:50.257 14:15:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:50.257 14:15:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:50.257 14:15:17 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 88671 00:16:50.823 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:50.823 Nvme0n1 : 3.00 21387.00 83.54 0.00 0.00 0.00 0.00 0.00 00:16:50.823 =================================================================================================================== 00:16:50.823 Total : 21387.00 83.54 0.00 0.00 0.00 0.00 0.00 00:16:50.823 00:16:51.756 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:51.756 Nvme0n1 : 4.00 21448.00 83.78 0.00 0.00 0.00 0.00 0.00 00:16:51.756 =================================================================================================================== 00:16:51.756 Total : 21448.00 83.78 0.00 0.00 0.00 0.00 0.00 00:16:51.756 00:16:52.690 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:52.690 Nvme0n1 : 5.00 21510.00 84.02 0.00 0.00 0.00 0.00 0.00 00:16:52.690 =================================================================================================================== 00:16:52.690 Total : 21510.00 84.02 0.00 0.00 0.00 0.00 0.00 00:16:52.690 00:16:53.622 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:53.622 Nvme0n1 : 6.00 21525.83 84.09 0.00 0.00 0.00 0.00 0.00 00:16:53.622 
=================================================================================================================== 00:16:53.622 Total : 21525.83 84.09 0.00 0.00 0.00 0.00 0.00 00:16:53.622 00:16:54.556 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:54.556 Nvme0n1 : 7.00 21805.71 85.18 0.00 0.00 0.00 0.00 0.00 00:16:54.556 =================================================================================================================== 00:16:54.556 Total : 21805.71 85.18 0.00 0.00 0.00 0.00 0.00 00:16:54.556 00:16:55.930 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:55.930 Nvme0n1 : 8.00 21956.25 85.77 0.00 0.00 0.00 0.00 0.00 00:16:55.930 =================================================================================================================== 00:16:55.930 Total : 21956.25 85.77 0.00 0.00 0.00 0.00 0.00 00:16:55.930 00:16:56.862 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:56.862 Nvme0n1 : 9.00 22030.00 86.05 0.00 0.00 0.00 0.00 0.00 00:16:56.862 =================================================================================================================== 00:16:56.862 Total : 22030.00 86.05 0.00 0.00 0.00 0.00 0.00 00:16:56.862 00:16:57.795 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:57.795 Nvme0n1 : 10.00 22198.20 86.71 0.00 0.00 0.00 0.00 0.00 00:16:57.795 =================================================================================================================== 00:16:57.795 Total : 22198.20 86.71 0.00 0.00 0.00 0.00 0.00 00:16:57.795 00:16:57.795 00:16:57.795 Latency(us) 00:16:57.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:57.795 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:57.795 Nvme0n1 : 10.01 22198.08 86.71 0.00 0.00 5761.33 3835.07 13107.20 00:16:57.795 =================================================================================================================== 00:16:57.795 Total : 22198.08 86.71 0.00 0.00 5761.33 3835.07 13107.20 00:16:57.795 0 00:16:57.795 14:15:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 88540 00:16:57.795 14:15:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 88540 ']' 00:16:57.795 14:15:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 88540 00:16:57.795 14:15:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:16:57.795 14:15:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:57.795 14:15:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88540 00:16:57.795 14:15:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:57.795 14:15:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:57.795 14:15:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88540' 00:16:57.795 killing process with pid 88540 00:16:57.795 14:15:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 88540 00:16:57.795 Received shutdown signal, test time was about 10.000000 seconds 00:16:57.795 00:16:57.795 Latency(us) 00:16:57.795 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:16:57.795 =================================================================================================================== 00:16:57.795 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:57.795 14:15:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 88540 00:16:58.052 14:15:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:16:58.309 14:15:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:58.567 14:15:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f664d4a9-46c4-439c-a3d1-64c2811ff2ea 00:16:58.567 14:15:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:58.825 14:15:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:58.825 14:15:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:16:58.825 14:15:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 85433 00:16:58.825 14:15:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 85433 00:16:58.825 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 85433 Killed "${NVMF_APP[@]}" "$@" 00:16:58.825 14:15:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:16:58.825 14:15:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:16:58.825 14:15:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:58.825 14:15:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:58.825 14:15:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:58.825 14:15:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:58.825 14:15:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=89997 00:16:58.825 14:15:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 89997 00:16:58.825 14:15:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 89997 ']' 00:16:58.825 14:15:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.825 14:15:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:58.825 14:15:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
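
This is where the dirty variant diverges from the clean one: instead of shutting the target down cleanly, the wrapper SIGKILLs it so the lvstore is left dirty on disk, then starts a fresh single-core target. Roughly (nvmf_tgt abbreviates build/bin/nvmf_tgt; waitforlisten is the autotest_common.sh helper seen in the trace):

    kill -9 "$nvmfpid"                   # 85433 here; bash reports it as "Killed"
    nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &     # restarted target, pid 89997 in this run
    waitforlisten "$nvmfpid"             # waits on /var/tmp/spdk.sock
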
00:16:58.825 14:15:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:58.825 14:15:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:58.825 [2024-07-24 14:15:26.103582] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:16:58.825 [2024-07-24 14:15:26.103664] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:58.825 EAL: No free 2048 kB hugepages reported on node 1 00:16:58.825 [2024-07-24 14:15:26.169827] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.083 [2024-07-24 14:15:26.253317] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:59.083 [2024-07-24 14:15:26.253381] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:59.083 [2024-07-24 14:15:26.253405] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:59.083 [2024-07-24 14:15:26.253416] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:59.083 [2024-07-24 14:15:26.253426] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:59.083 [2024-07-24 14:15:26.253451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.083 14:15:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:59.083 14:15:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:16:59.083 14:15:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:59.083 14:15:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:59.083 14:15:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:59.083 14:15:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:59.083 14:15:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:59.340 [2024-07-24 14:15:26.615158] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:16:59.340 [2024-07-24 14:15:26.615304] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:16:59.340 [2024-07-24 14:15:26.615360] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:16:59.340 14:15:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:16:59.340 14:15:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 589724ab-a7c9-461a-a0b4-ba7e3838df10 00:16:59.340 14:15:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=589724ab-a7c9-461a-a0b4-ba7e3838df10 00:16:59.340 14:15:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:59.340 14:15:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:16:59.340 14:15:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty 
-- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:59.340 14:15:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:59.341 14:15:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:59.598 14:15:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 589724ab-a7c9-461a-a0b4-ba7e3838df10 -t 2000 00:16:59.855 [ 00:16:59.855 { 00:16:59.855 "name": "589724ab-a7c9-461a-a0b4-ba7e3838df10", 00:16:59.855 "aliases": [ 00:16:59.855 "lvs/lvol" 00:16:59.855 ], 00:16:59.855 "product_name": "Logical Volume", 00:16:59.855 "block_size": 4096, 00:16:59.855 "num_blocks": 38912, 00:16:59.855 "uuid": "589724ab-a7c9-461a-a0b4-ba7e3838df10", 00:16:59.855 "assigned_rate_limits": { 00:16:59.855 "rw_ios_per_sec": 0, 00:16:59.855 "rw_mbytes_per_sec": 0, 00:16:59.855 "r_mbytes_per_sec": 0, 00:16:59.855 "w_mbytes_per_sec": 0 00:16:59.855 }, 00:16:59.855 "claimed": false, 00:16:59.855 "zoned": false, 00:16:59.855 "supported_io_types": { 00:16:59.855 "read": true, 00:16:59.855 "write": true, 00:16:59.855 "unmap": true, 00:16:59.855 "write_zeroes": true, 00:16:59.855 "flush": false, 00:16:59.856 "reset": true, 00:16:59.856 "compare": false, 00:16:59.856 "compare_and_write": false, 00:16:59.856 "abort": false, 00:16:59.856 "nvme_admin": false, 00:16:59.856 "nvme_io": false 00:16:59.856 }, 00:16:59.856 "driver_specific": { 00:16:59.856 "lvol": { 00:16:59.856 "lvol_store_uuid": "f664d4a9-46c4-439c-a3d1-64c2811ff2ea", 00:16:59.856 "base_bdev": "aio_bdev", 00:16:59.856 "thin_provision": false, 00:16:59.856 "num_allocated_clusters": 38, 00:16:59.856 "snapshot": false, 00:16:59.856 "clone": false, 00:16:59.856 "esnap_clone": false 00:16:59.856 } 00:16:59.856 } 00:16:59.856 } 00:16:59.856 ] 00:16:59.856 14:15:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:16:59.856 14:15:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f664d4a9-46c4-439c-a3d1-64c2811ff2ea 00:16:59.856 14:15:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:17:00.114 14:15:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:17:00.114 14:15:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f664d4a9-46c4-439c-a3d1-64c2811ff2ea 00:17:00.114 14:15:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:17:00.382 14:15:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:17:00.382 14:15:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:00.673 [2024-07-24 14:15:27.876033] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:00.673 14:15:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f664d4a9-46c4-439c-a3d1-64c2811ff2ea 
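
Re-creating the AIO bdev on the same file makes the blobstore load find the dirty metadata and run recovery (the bs_recover / "Recover: blob" notices above), after which the test checks that the grown geometry survived, then hot-removes the base bdev and asserts the lvstore disappears with it. As a sketch (NOT is the autotest_common.sh helper that inverts the expected exit status):

    rpc.py bdev_aio_create ./aio_bdev aio_bdev 4096    # load triggers recovery
    # expect free_clusters == 61 and total_data_clusters == 99 after recovery
    rpc.py bdev_aio_delete aio_bdev                    # hot-remove closes lvstore "lvs"
    NOT rpc.py bdev_lvol_get_lvstores -u "$lvs"        # must now fail
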
00:17:00.673 14:15:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:17:00.673 14:15:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f664d4a9-46c4-439c-a3d1-64c2811ff2ea 00:17:00.673 14:15:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:00.673 14:15:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:00.673 14:15:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:00.673 14:15:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:00.673 14:15:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:00.673 14:15:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:00.673 14:15:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:00.673 14:15:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:17:00.673 14:15:27 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f664d4a9-46c4-439c-a3d1-64c2811ff2ea 00:17:00.931 request: 00:17:00.931 { 00:17:00.931 "uuid": "f664d4a9-46c4-439c-a3d1-64c2811ff2ea", 00:17:00.931 "method": "bdev_lvol_get_lvstores", 00:17:00.931 "req_id": 1 00:17:00.931 } 00:17:00.931 Got JSON-RPC error response 00:17:00.931 response: 00:17:00.931 { 00:17:00.931 "code": -19, 00:17:00.931 "message": "No such device" 00:17:00.931 } 00:17:00.931 14:15:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:17:00.931 14:15:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:00.931 14:15:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:00.931 14:15:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:00.931 14:15:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:01.189 aio_bdev 00:17:01.189 14:15:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 589724ab-a7c9-461a-a0b4-ba7e3838df10 00:17:01.189 14:15:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=589724ab-a7c9-461a-a0b4-ba7e3838df10 00:17:01.189 14:15:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:01.189 14:15:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:17:01.189 14:15:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:01.189 14:15:28 
nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:01.189 14:15:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:01.447 14:15:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 589724ab-a7c9-461a-a0b4-ba7e3838df10 -t 2000 00:17:01.706 [ 00:17:01.706 { 00:17:01.706 "name": "589724ab-a7c9-461a-a0b4-ba7e3838df10", 00:17:01.706 "aliases": [ 00:17:01.706 "lvs/lvol" 00:17:01.706 ], 00:17:01.706 "product_name": "Logical Volume", 00:17:01.706 "block_size": 4096, 00:17:01.706 "num_blocks": 38912, 00:17:01.706 "uuid": "589724ab-a7c9-461a-a0b4-ba7e3838df10", 00:17:01.706 "assigned_rate_limits": { 00:17:01.706 "rw_ios_per_sec": 0, 00:17:01.706 "rw_mbytes_per_sec": 0, 00:17:01.706 "r_mbytes_per_sec": 0, 00:17:01.706 "w_mbytes_per_sec": 0 00:17:01.706 }, 00:17:01.706 "claimed": false, 00:17:01.706 "zoned": false, 00:17:01.706 "supported_io_types": { 00:17:01.706 "read": true, 00:17:01.706 "write": true, 00:17:01.706 "unmap": true, 00:17:01.706 "write_zeroes": true, 00:17:01.706 "flush": false, 00:17:01.706 "reset": true, 00:17:01.706 "compare": false, 00:17:01.706 "compare_and_write": false, 00:17:01.706 "abort": false, 00:17:01.706 "nvme_admin": false, 00:17:01.706 "nvme_io": false 00:17:01.706 }, 00:17:01.706 "driver_specific": { 00:17:01.706 "lvol": { 00:17:01.706 "lvol_store_uuid": "f664d4a9-46c4-439c-a3d1-64c2811ff2ea", 00:17:01.706 "base_bdev": "aio_bdev", 00:17:01.706 "thin_provision": false, 00:17:01.706 "num_allocated_clusters": 38, 00:17:01.706 "snapshot": false, 00:17:01.706 "clone": false, 00:17:01.706 "esnap_clone": false 00:17:01.706 } 00:17:01.706 } 00:17:01.706 } 00:17:01.706 ] 00:17:01.706 14:15:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:17:01.706 14:15:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f664d4a9-46c4-439c-a3d1-64c2811ff2ea 00:17:01.706 14:15:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:01.965 14:15:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:01.965 14:15:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f664d4a9-46c4-439c-a3d1-64c2811ff2ea 00:17:01.965 14:15:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:02.222 14:15:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:02.222 14:15:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 589724ab-a7c9-461a-a0b4-ba7e3838df10 00:17:02.479 14:15:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f664d4a9-46c4-439c-a3d1-64c2811ff2ea 00:17:02.737 14:15:29 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 
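
With the expected "No such device" (-19) JSON-RPC error confirmed, the test re-creates the AIO bdev one last time, re-verifies the cluster counts, and tears everything down in reverse order; condensed:

    rpc.py bdev_lvol_delete "$lvol"
    rpc.py bdev_lvol_delete_lvstore -u "$lvs"
    rpc.py bdev_aio_delete aio_bdev
    rm -f ./aio_bdev    # the rm is the next step below
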
00:17:02.995 14:15:30 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:02.995 00:17:02.995 real 0m18.964s 00:17:02.995 user 0m48.893s 00:17:02.995 sys 0m3.976s 00:17:02.995 14:15:30 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:02.995 14:15:30 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:02.995 ************************************ 00:17:02.995 END TEST lvs_grow_dirty 00:17:02.995 ************************************ 00:17:02.995 14:15:30 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:02.995 14:15:30 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:17:02.995 14:15:30 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:17:02.995 14:15:30 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:17:02.995 14:15:30 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:02.995 14:15:30 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:17:02.995 14:15:30 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:17:02.995 14:15:30 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:17:02.995 14:15:30 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:02.995 nvmf_trace.0 00:17:02.995 14:15:30 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:17:02.995 14:15:30 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:02.995 14:15:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:02.995 14:15:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:17:02.995 14:15:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:02.995 14:15:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:02.995 14:15:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:17:02.995 14:15:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:02.995 14:15:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:02.995 rmmod nvme_rdma 00:17:02.995 rmmod nvme_fabrics 00:17:02.995 14:15:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:02.995 14:15:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:17:02.995 14:15:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:17:02.995 14:15:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 89997 ']' 00:17:02.995 14:15:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 89997 00:17:02.995 14:15:30 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 89997 ']' 00:17:02.995 14:15:30 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 89997 00:17:02.995 14:15:30 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:17:02.995 14:15:30 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:02.995 14:15:30 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89997 00:17:02.995 14:15:30 nvmf_rdma.nvmf_lvs_grow -- 
common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:02.995 14:15:30 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:02.995 14:15:30 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89997' 00:17:02.995 killing process with pid 89997 00:17:02.995 14:15:30 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 89997 00:17:02.995 14:15:30 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 89997 00:17:03.254 14:15:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:03.254 14:15:30 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:17:03.254 00:17:03.254 real 0m40.172s 00:17:03.254 user 1m12.068s 00:17:03.254 sys 0m7.460s 00:17:03.254 14:15:30 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:03.254 14:15:30 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:03.254 ************************************ 00:17:03.254 END TEST nvmf_lvs_grow 00:17:03.254 ************************************ 00:17:03.254 14:15:30 nvmf_rdma -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:17:03.254 14:15:30 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:03.254 14:15:30 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:03.254 14:15:30 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:17:03.254 ************************************ 00:17:03.254 START TEST nvmf_bdev_io_wait 00:17:03.254 ************************************ 00:17:03.254 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:17:03.512 * Looking for test storage... 
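
Before the next test proceeds, note how the lvs_grow wrapper preserved its trace data on exit above: process_shm located the nvmf_trace.0 file in /dev/shm and archived it into the build output directory for offline analysis. A sketch ($output here standing for the spdk/../output path in the tar line above):

    tar -C /dev/shm/ -cvzf "$output/nvmf_trace.0_shm.tar.gz" nvmf_trace.0
    # decode later with the command the target suggested at startup:
    spdk_trace -s nvmf -i 0
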
00:17:03.512 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:03.512 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:03.513 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:17:03.513 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:03.513 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:03.513 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:03.513 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:03.513 14:15:30 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.513 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:03.513 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.513 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:03.513 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:03.513 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:17:03.513 14:15:30 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:06.044 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:06.044 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:17:06.044 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:06.044 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:06.044 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:06.044 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:06.044 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:06.044 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:17:06.044 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:06.044 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:17:06.044 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:17:06.044 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:17:06.044 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:17:06.044 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:17:06.044 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:17:06.044 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:06.044 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:06.044 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:06.044 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:06.044 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:06.044 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:06.044 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:06.044 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:06.044 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:06.044 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:06.044 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:06.044 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:06.044 
14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:06.044 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:06.044 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:06.044 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:17:06.044 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:17:06.044 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:06.044 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:17:06.045 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:17:06.045 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:17:06.045 Found net devices under 0000:81:00.0: mlx_0_0 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:17:06.045 Found net devices under 0000:81:00.1: mlx_0_1 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # rdma_device_init 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # uname 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # allocate_nic_ips 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:17:06.045 14:15:33 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:17:06.045 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:06.045 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:17:06.045 altname enp129s0f0np0 00:17:06.045 inet 192.168.100.8/24 scope global mlx_0_0 00:17:06.045 valid_lft forever preferred_lft forever 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:17:06.045 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:06.045 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:17:06.045 altname enp129s0f1np1 00:17:06.045 inet 192.168.100.9/24 scope global mlx_0_1 00:17:06.045 valid_lft forever preferred_lft forever 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # get_available_rdma_ips 
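
The interface discovery traced here boils down to one pipeline per RDMA port; nvmf/common.sh runs it for each mlx_0_* device to populate NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP just below:

    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8
    ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.9
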
00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:06.045 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:17:06.046 192.168.100.9' 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:17:06.046 192.168.100.9' 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # head -n 1 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:06.046 
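Annotation: get_ip_address, traced twice in the block above, is a three-stage pipeline over `ip -o -4 addr show`; the first and second target IPs are then peeled off the resulting multi-line list with head/tail (common.sh@457 above, @458 just below). A standalone restatement of that idiom follows — the helper name is illustrative, the commands and values are copied from the trace:

    # `ip -o -4` emits one line per address; field 4 is "ADDR/PREFIXLEN",
    # so awk selects the field and cut strips the prefix length.
    get_ipv4() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }

    rdma_ips=$(for ifc in mlx_0_0 mlx_0_1; do get_ipv4 "$ifc"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$rdma_ips" | head -n 1)                # -> 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$rdma_ips" | tail -n +2 | head -n 1)  # -> 192.168.100.9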
14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:17:06.046 192.168.100.9' 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # tail -n +2 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # head -n 1 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=92522 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 92522 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 92522 ']' 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:06.046 [2024-07-24 14:15:33.187313] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:17:06.046 [2024-07-24 14:15:33.187409] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:06.046 EAL: No free 2048 kB hugepages reported on node 1 00:17:06.046 [2024-07-24 14:15:33.255156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:06.046 [2024-07-24 14:15:33.344296] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:06.046 [2024-07-24 14:15:33.344375] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:06.046 [2024-07-24 14:15:33.344388] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:06.046 [2024-07-24 14:15:33.344399] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:06.046 [2024-07-24 14:15:33.344408] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:06.046 [2024-07-24 14:15:33.344489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:06.046 [2024-07-24 14:15:33.344555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:06.046 [2024-07-24 14:15:33.344620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:06.046 [2024-07-24 14:15:33.344622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:06.046 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:06.303 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:06.304 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:06.304 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.304 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:06.304 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.304 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:06.304 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.304 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:06.304 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.304 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:06.304 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.304 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:06.304 [2024-07-24 14:15:33.526580] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18d5940/0x18d9e10) succeed. 00:17:06.304 [2024-07-24 14:15:33.537064] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18d6f30/0x191b4a0) succeed. 
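Annotation: the target bring-up traced above starts nvmf_tgt in --wait-for-rpc mode and then configures it over the RPC socket; rpc_cmd in the test harness forwards to the scripts/rpc.py CLI. A condensed replay, using only commands and values that appear in the trace (the subsystem plumbing traced just below is included for completeness):

    spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    # (the harness waitforlisten's the pid on /var/tmp/spdk.sock before issuing RPCs)

    # deliberately tiny bdev I/O pool (-p 5) and cache (-c 1): this test wants
    # submissions to run out of bdev_io objects so the io_wait path is exercised
    $spdk/scripts/rpc.py bdev_set_options -p 5 -c 1
    $spdk/scripts/rpc.py framework_start_init
    $spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    $spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420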
00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:06.562 Malloc0 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:06.562 [2024-07-24 14:15:33.736819] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=92677 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:06.562 { 00:17:06.562 "params": { 00:17:06.562 "name": "Nvme$subsystem", 00:17:06.562 "trtype": "$TEST_TRANSPORT", 00:17:06.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:06.562 "adrfam": "ipv4", 00:17:06.562 "trsvcid": "$NVMF_PORT", 00:17:06.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:06.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:06.562 "hdgst": ${hdgst:-false}, 00:17:06.562 "ddgst": ${ddgst:-false} 00:17:06.562 }, 00:17:06.562 "method": "bdev_nvme_attach_controller" 00:17:06.562 } 00:17:06.562 EOF 00:17:06.562 )") 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=92679 
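Annotation: the write bdevperf (pid 92677) has just been launched and the read instance assigned pid 92679; flush and unmap follow the same shape below. A sketch of the fan-out pattern — the `&` backgrounding and `$!` capture are implicit in the trace, which only records the resulting pids:

    bdevperf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf
    $bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!
    $bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 &
    READ_PID=$!
    # ... likewise -w flush (-m 0x40 -i 3) and -w unmap (-m 0x80 -i 4) ...
    wait $WRITE_PID    # then READ_PID, FLUSH_PID, UNMAP_PID in turn

Distinct -i shm ids give each instance its own DPDK file prefix (spdk1..spdk4) and the disjoint core masks keep the four reactors off each other's cores; the process substitution <(...) is why the trace shows the config arriving as --json /dev/fd/63.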
00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:06.562 { 00:17:06.562 "params": { 00:17:06.562 "name": "Nvme$subsystem", 00:17:06.562 "trtype": "$TEST_TRANSPORT", 00:17:06.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:06.562 "adrfam": "ipv4", 00:17:06.562 "trsvcid": "$NVMF_PORT", 00:17:06.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:06.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:06.562 "hdgst": ${hdgst:-false}, 00:17:06.562 "ddgst": ${ddgst:-false} 00:17:06.562 }, 00:17:06.562 "method": "bdev_nvme_attach_controller" 00:17:06.562 } 00:17:06.562 EOF 00:17:06.562 )") 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=92682 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=92686 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:06.562 { 00:17:06.562 "params": { 00:17:06.562 "name": "Nvme$subsystem", 00:17:06.562 "trtype": "$TEST_TRANSPORT", 00:17:06.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:06.562 "adrfam": "ipv4", 00:17:06.562 "trsvcid": "$NVMF_PORT", 00:17:06.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:06.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:06.562 "hdgst": ${hdgst:-false}, 00:17:06.562 "ddgst": ${ddgst:-false} 00:17:06.562 }, 00:17:06.562 "method": "bdev_nvme_attach_controller" 00:17:06.562 } 00:17:06.562 EOF 00:17:06.562 )") 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:06.562 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:06.562 14:15:33 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:06.563 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:06.563 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:06.563 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:06.563 { 00:17:06.563 "params": { 00:17:06.563 "name": "Nvme$subsystem", 00:17:06.563 "trtype": "$TEST_TRANSPORT", 00:17:06.563 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:06.563 "adrfam": "ipv4", 00:17:06.563 "trsvcid": "$NVMF_PORT", 00:17:06.563 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:06.563 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:06.563 "hdgst": ${hdgst:-false}, 00:17:06.563 "ddgst": ${ddgst:-false} 00:17:06.563 }, 00:17:06.563 "method": "bdev_nvme_attach_controller" 00:17:06.563 } 00:17:06.563 EOF 00:17:06.563 )") 00:17:06.563 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:06.563 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:06.563 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 92677 00:17:06.563 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:06.563 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:06.563 "params": { 00:17:06.563 "name": "Nvme1", 00:17:06.563 "trtype": "rdma", 00:17:06.563 "traddr": "192.168.100.8", 00:17:06.563 "adrfam": "ipv4", 00:17:06.563 "trsvcid": "4420", 00:17:06.563 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:06.563 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:06.563 "hdgst": false, 00:17:06.563 "ddgst": false 00:17:06.563 }, 00:17:06.563 "method": "bdev_nvme_attach_controller" 00:17:06.563 }' 00:17:06.563 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:06.563 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:06.563 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:06.563 "params": { 00:17:06.563 "name": "Nvme1", 00:17:06.563 "trtype": "rdma", 00:17:06.563 "traddr": "192.168.100.8", 00:17:06.563 "adrfam": "ipv4", 00:17:06.563 "trsvcid": "4420", 00:17:06.563 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:06.563 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:06.563 "hdgst": false, 00:17:06.563 "ddgst": false 00:17:06.563 }, 00:17:06.563 "method": "bdev_nvme_attach_controller" 00:17:06.563 }' 00:17:06.563 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
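Annotation: gen_nvmf_target_json, traced at nvmf/common.sh@532-558 above, accumulates one JSON fragment per subsystem in a bash array via a command-substituted heredoc, then joins the fragments with IFS=, and prints the result (the jq step and the surrounding "subsystems"/"bdev" envelope are elided in this sketch, which only restates the array/heredoc/IFS idiom):

    gen_json_fragments() {
        local subsystem config=()
        for subsystem in "${@:-1}"; do
            config+=("$(cat <<EOF
    {
      "params": {
        "name": "Nvme$subsystem",
        "trtype": "$TEST_TRANSPORT",
        "traddr": "$NVMF_FIRST_TARGET_IP",
        "adrfam": "ipv4",
        "trsvcid": "$NVMF_PORT",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
        "hostnqn": "nqn.2016-06.io.spdk:host$subsystem"
      },
      "method": "bdev_nvme_attach_controller"
    }
    EOF
            )")
        done
        local IFS=,
        printf '%s\n' "${config[*]}"   # [*] joins the fragments with the , from IFS
    }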
00:17:06.563 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:06.563 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:06.563 "params": { 00:17:06.563 "name": "Nvme1", 00:17:06.563 "trtype": "rdma", 00:17:06.563 "traddr": "192.168.100.8", 00:17:06.563 "adrfam": "ipv4", 00:17:06.563 "trsvcid": "4420", 00:17:06.563 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:06.563 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:06.563 "hdgst": false, 00:17:06.563 "ddgst": false 00:17:06.563 }, 00:17:06.563 "method": "bdev_nvme_attach_controller" 00:17:06.563 }' 00:17:06.563 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:06.563 14:15:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:06.563 "params": { 00:17:06.563 "name": "Nvme1", 00:17:06.563 "trtype": "rdma", 00:17:06.563 "traddr": "192.168.100.8", 00:17:06.563 "adrfam": "ipv4", 00:17:06.563 "trsvcid": "4420", 00:17:06.563 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:06.563 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:06.563 "hdgst": false, 00:17:06.563 "ddgst": false 00:17:06.563 }, 00:17:06.563 "method": "bdev_nvme_attach_controller" 00:17:06.563 }' 00:17:06.563 [2024-07-24 14:15:33.779515] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:17:06.563 [2024-07-24 14:15:33.779517] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:17:06.563 [2024-07-24 14:15:33.779600] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:06.563 [2024-07-24 14:15:33.779601] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:06.563 [2024-07-24 14:15:33.780128] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:17:06.563 [2024-07-24 14:15:33.780185] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:06.563 [2024-07-24 14:15:33.781695] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:17:06.563 [2024-07-24 14:15:33.781767] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:06.563 EAL: No free 2048 kB hugepages reported on node 1 00:17:06.821 EAL: No free 2048 kB hugepages reported on node 1 00:17:06.821 [2024-07-24 14:15:33.965711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.821 EAL: No free 2048 kB hugepages reported on node 1 00:17:06.821 [2024-07-24 14:15:34.041715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:06.821 [2024-07-24 14:15:34.067748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.821 EAL: No free 2048 kB hugepages reported on node 1 00:17:06.821 [2024-07-24 14:15:34.143577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:06.821 [2024-07-24 14:15:34.167219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.079 [2024-07-24 14:15:34.239089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.079 [2024-07-24 14:15:34.246198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:07.079 [2024-07-24 14:15:34.309970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:07.079 Running I/O for 1 seconds... 00:17:07.079 Running I/O for 1 seconds... 00:17:07.337 Running I/O for 1 seconds... 00:17:07.337 Running I/O for 1 seconds... 00:17:08.271 00:17:08.271 Latency(us) 00:17:08.271 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.271 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:08.271 Nvme1n1 : 1.01 15016.06 58.66 0.00 0.00 8494.27 5728.33 22136.60 00:17:08.271 =================================================================================================================== 00:17:08.271 Total : 15016.06 58.66 0.00 0.00 8494.27 5728.33 22136.60 00:17:08.271 00:17:08.271 Latency(us) 00:17:08.271 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.271 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:08.271 Nvme1n1 : 1.01 13560.93 52.97 0.00 0.00 9402.36 6407.96 17961.72 00:17:08.271 =================================================================================================================== 00:17:08.271 Total : 13560.93 52.97 0.00 0.00 9402.36 6407.96 17961.72 00:17:08.271 00:17:08.271 Latency(us) 00:17:08.271 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.271 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:08.271 Nvme1n1 : 1.00 16284.40 63.61 0.00 0.00 7837.69 4271.98 20291.89 00:17:08.271 =================================================================================================================== 00:17:08.271 Total : 16284.40 63.61 0.00 0.00 7837.69 4271.98 20291.89 00:17:08.271 00:17:08.271 Latency(us) 00:17:08.271 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.271 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:08.271 Nvme1n1 : 1.00 202823.70 792.28 0.00 0.00 628.62 254.86 2378.71 00:17:08.271 =================================================================================================================== 00:17:08.271 Total : 202823.70 792.28 0.00 0.00 628.62 254.86 2378.71 00:17:08.529 14:15:35 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # 
wait 92679 00:17:08.529 14:15:35 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 92682 00:17:08.529 14:15:35 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 92686 00:17:08.529 14:15:35 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:08.529 14:15:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.529 14:15:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:08.529 14:15:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.529 14:15:35 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:08.529 14:15:35 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:08.529 14:15:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:08.529 14:15:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:17:08.529 14:15:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:08.529 14:15:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:08.529 14:15:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:17:08.529 14:15:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:08.529 14:15:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:08.529 rmmod nvme_rdma 00:17:08.529 rmmod nvme_fabrics 00:17:08.529 14:15:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:08.529 14:15:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:08.529 14:15:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:08.529 14:15:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 92522 ']' 00:17:08.529 14:15:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 92522 00:17:08.529 14:15:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 92522 ']' 00:17:08.529 14:15:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 92522 00:17:08.529 14:15:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:17:08.530 14:15:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:08.530 14:15:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 92522 00:17:08.787 14:15:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:08.787 14:15:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:08.787 14:15:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 92522' 00:17:08.787 killing process with pid 92522 00:17:08.787 14:15:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 92522 00:17:08.787 14:15:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 92522 00:17:09.046 14:15:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:09.046 14:15:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:17:09.046 00:17:09.046 real 0m5.617s 00:17:09.046 user 0m17.796s 00:17:09.046 sys 0m3.051s 00:17:09.046 14:15:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:09.046 14:15:36 
nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:09.046 ************************************ 00:17:09.046 END TEST nvmf_bdev_io_wait 00:17:09.046 ************************************ 00:17:09.046 14:15:36 nvmf_rdma -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:17:09.046 14:15:36 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:09.046 14:15:36 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:09.046 14:15:36 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:17:09.046 ************************************ 00:17:09.046 START TEST nvmf_queue_depth 00:17:09.046 ************************************ 00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:17:09.046 * Looking for test storage... 00:17:09.046 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:09.046 14:15:36 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:17:09.047 14:15:36 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:09.047 14:15:36 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:09.047 14:15:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:17:09.047 14:15:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:09.047 14:15:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:09.047 14:15:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:09.047 14:15:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:09.047 14:15:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.047 14:15:36 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:09.047 14:15:36 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.047 14:15:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:09.047 14:15:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:09.047 14:15:36 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:17:09.047 14:15:36 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:17:11.576 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:17:11.576 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:11.576 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@388 
-- # [[ rdma == tcp ]] 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:17:11.577 Found net devices under 0000:81:00.0: mlx_0_0 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:17:11.577 Found net devices under 0000:81:00.1: mlx_0_1 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@420 -- # rdma_device_init 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@58 -- # uname 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@502 -- # allocate_nic_ips 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:17:11.577 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:11.577 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:17:11.577 altname enp129s0f0np0 00:17:11.577 inet 192.168.100.8/24 scope global mlx_0_0 00:17:11.577 valid_lft forever preferred_lft forever 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:17:11.577 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:11.577 link/ether 
24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:17:11.577 altname enp129s0f1np1 00:17:11.577 inet 192.168.100.9/24 scope global mlx_0_1 00:17:11.577 valid_lft forever preferred_lft forever 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:11.577 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:11.578 
14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:11.578 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:11.578 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:17:11.578 192.168.100.9' 00:17:11.578 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:17:11.578 192.168.100.9' 00:17:11.578 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # head -n 1 00:17:11.578 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:11.578 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:17:11.578 192.168.100.9' 00:17:11.578 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # tail -n +2 00:17:11.578 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # head -n 1 00:17:11.578 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:11.578 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:17:11.578 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:11.578 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:17:11.578 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:17:11.578 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:17:11.578 14:15:38 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:11.578 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:11.578 14:15:38 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:11.578 14:15:38 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:11.578 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=94905 00:17:11.578 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:11.578 14:15:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 94905 00:17:11.578 14:15:38 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 94905 ']' 00:17:11.578 14:15:38 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.578 14:15:38 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:11.578 14:15:38 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.578 14:15:38 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:11.578 14:15:38 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:11.578 [2024-07-24 14:15:38.850161] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:17:11.578 [2024-07-24 14:15:38.850244] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:11.578 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.578 [2024-07-24 14:15:38.919763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.836 [2024-07-24 14:15:39.010690] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:11.836 [2024-07-24 14:15:39.010768] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:11.836 [2024-07-24 14:15:39.010785] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:11.836 [2024-07-24 14:15:39.010810] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:11.836 [2024-07-24 14:15:39.010823] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:11.836 [2024-07-24 14:15:39.010880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.836 14:15:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:11.836 14:15:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:17:11.836 14:15:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:11.836 14:15:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:11.836 14:15:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:11.836 14:15:39 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:11.836 14:15:39 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:11.836 14:15:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.836 14:15:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:11.836 [2024-07-24 14:15:39.179857] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1038c70/0x103d120) succeed. 00:17:11.836 [2024-07-24 14:15:39.192124] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x103a120/0x107e7b0) succeed. 
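Condensed, the target bring-up just traced is: start nvmf_tgt pinned to core mask 0x2, wait for its RPC socket, then create the RDMA transport with 1024 shared buffers and 8 KiB in-capsule data, which registers both mlx5 ports as IB devices. A sketch; the polling loop is a hedged approximation of waitforlisten, not its actual body:

spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
$spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# approximate waitforlisten: poll until the default RPC socket answers
until $spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done
$spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192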
00:17:12.094 14:15:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.094 14:15:39 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:12.094 14:15:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.094 14:15:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:12.094 Malloc0 00:17:12.094 14:15:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.094 14:15:39 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:12.094 14:15:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.094 14:15:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:12.094 14:15:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.094 14:15:39 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:12.094 14:15:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.094 14:15:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:12.094 14:15:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.094 14:15:39 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:12.094 14:15:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.094 14:15:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:12.094 [2024-07-24 14:15:39.288870] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:12.094 14:15:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.094 14:15:39 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=95042 00:17:12.094 14:15:39 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:12.094 14:15:39 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:12.094 14:15:39 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 95042 /var/tmp/bdevperf.sock 00:17:12.094 14:15:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 95042 ']' 00:17:12.094 14:15:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:12.094 14:15:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:12.094 14:15:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:12.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
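The storage stack the queue-depth test exports is four more RPCs, mirroring the rpc_cmd calls traced above; written out as direct rpc.py invocations against the target's default /var/tmp/spdk.sock:

rpc="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py"
$rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM-backed bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose Malloc0 as a namespace
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420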
00:17:12.094 14:15:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:12.094 14:15:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:12.094 [2024-07-24 14:15:39.331251] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:17:12.094 [2024-07-24 14:15:39.331315] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95042 ] 00:17:12.094 EAL: No free 2048 kB hugepages reported on node 1 00:17:12.094 [2024-07-24 14:15:39.400551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.352 [2024-07-24 14:15:39.490568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.352 14:15:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:12.352 14:15:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:17:12.352 14:15:39 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:12.352 14:15:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.352 14:15:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:12.352 NVMe0n1 00:17:12.352 14:15:39 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.352 14:15:39 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:12.610 Running I/O for 10 seconds... 
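On the initiator side, bdevperf starts idle in -z (wait-for-RPC) mode on its own socket, the NVMe-oF controller is attached over RDMA, and perform_tests then launches the workload: 10 seconds of verify at queue depth 1024 with 4096-byte I/O. Condensed from the trace:

spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
$spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The results below are self-consistent: 12766.02 IOPS at 4096 B per I/O is 12766.02 * 4096 / 2^20 ≈ 49.87 MiB/s, matching the MiB/s column, and by Little's law a 1024-deep queue at that rate implies 1024 / 12766 s ≈ 80.2 ms average latency, in line with the reported 79877.27 us.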
00:17:22.575 00:17:22.575 Latency(us) 00:17:22.575 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.575 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:17:22.575 Verification LBA range: start 0x0 length 0x4000 00:17:22.575 NVMe0n1 : 10.05 12766.02 49.87 0.00 0.00 79877.27 11165.39 48545.19 00:17:22.576 =================================================================================================================== 00:17:22.576 Total : 12766.02 49.87 0.00 0.00 79877.27 11165.39 48545.19 00:17:22.576 0 00:17:22.576 14:15:49 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 95042 00:17:22.576 14:15:49 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 95042 ']' 00:17:22.576 14:15:49 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 95042 00:17:22.576 14:15:49 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:17:22.576 14:15:49 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:22.576 14:15:49 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95042 00:17:22.576 14:15:49 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:22.576 14:15:49 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:22.576 14:15:49 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95042' 00:17:22.576 killing process with pid 95042 00:17:22.576 14:15:49 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 95042 00:17:22.576 Received shutdown signal, test time was about 10.000000 seconds 00:17:22.576 00:17:22.576 Latency(us) 00:17:22.576 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.576 =================================================================================================================== 00:17:22.576 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:22.576 14:15:49 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 95042 00:17:22.872 14:15:50 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:22.872 14:15:50 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:17:22.872 14:15:50 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:22.872 14:15:50 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:17:22.872 14:15:50 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:22.872 14:15:50 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:22.872 14:15:50 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:17:22.872 14:15:50 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:22.872 14:15:50 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:22.872 rmmod nvme_rdma 00:17:22.872 rmmod nvme_fabrics 00:17:22.872 14:15:50 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:22.872 14:15:50 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:17:22.872 14:15:50 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:17:22.872 14:15:50 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 94905 ']' 00:17:22.872 14:15:50 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 94905 00:17:22.872 14:15:50 
nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 94905 ']' 00:17:22.872 14:15:50 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 94905 00:17:22.872 14:15:50 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:17:22.872 14:15:50 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:22.872 14:15:50 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 94905 00:17:23.129 14:15:50 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:23.129 14:15:50 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:23.129 14:15:50 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 94905' 00:17:23.129 killing process with pid 94905 00:17:23.129 14:15:50 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 94905 00:17:23.129 14:15:50 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 94905 00:17:23.387 14:15:50 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:23.387 14:15:50 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:17:23.387 00:17:23.387 real 0m14.270s 00:17:23.387 user 0m23.462s 00:17:23.387 sys 0m2.364s 00:17:23.387 14:15:50 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:23.387 14:15:50 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:23.387 ************************************ 00:17:23.387 END TEST nvmf_queue_depth 00:17:23.387 ************************************ 00:17:23.387 14:15:50 nvmf_rdma -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:17:23.387 14:15:50 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:23.387 14:15:50 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:23.387 14:15:50 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:17:23.387 ************************************ 00:17:23.387 START TEST nvmf_target_multipath 00:17:23.387 ************************************ 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:17:23.387 * Looking for test storage... 
00:17:23.387 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
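nvmftestfini, registered on the trap line above, is what produces the rmmod output seen at the end of each test: it unloads the fabrics modules and kills the target if one is still running, so cleanup happens even on an aborted run. A condensed, hedged approximation of its effect, not its actual body:

nvmftestfini() {
    set +e                        # tolerate modules that are already unloaded
    modprobe -v -r nvme-rdma      # trace shows: "rmmod nvme_rdma" / "rmmod nvme_fabrics"
    modprobe -v -r nvme-fabrics
    set -e
    [ -n "${nvmfpid:-}" ] && kill "$nvmfpid" 2>/dev/null
}
trap nvmftestfini SIGINT SIGTERM EXIT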
00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:17:23.387 14:15:50 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:25.918 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:25.918 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:17:25.918 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:25.918 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:25.918 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:25.918 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:25.918 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:25.918 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:17:25.918 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:25.918 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:17:25.918 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:17:25.918 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:17:25.918 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:17:25.918 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:17:25.918 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:17:25.918 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:25.918 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:25.918 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:25.918 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:25.918 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:25.918 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:25.918 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:25.918 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:25.918 14:15:53 
nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:25.918 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:25.918 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:25.918 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:25.918 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:17:25.919 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:17:25.919 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 
== 0 )) 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:17:25.919 Found net devices under 0000:81:00.0: mlx_0_0 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:17:25.919 Found net devices under 0000:81:00.1: mlx_0_1 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@420 -- # rdma_device_init 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@58 -- # uname 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@502 -- # allocate_nic_ips 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:17:25.919 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:25.919 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:17:25.919 altname enp129s0f0np0 00:17:25.919 inet 192.168.100.8/24 scope global mlx_0_0 00:17:25.919 valid_lft forever preferred_lft forever 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- 
nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:17:25.919 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:25.919 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:17:25.919 altname enp129s0f1np1 00:17:25.919 inet 192.168.100.9/24 scope global mlx_0_1 00:17:25.919 valid_lft forever preferred_lft forever 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:25.919 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:25.920 
14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:17:25.920 192.168.100.9' 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:17:25.920 192.168.100.9' 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # head -n 1 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:17:25.920 192.168.100.9' 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # tail -n +2 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # head -n 1 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:17:25.920 run this test only with TCP transport for now 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:25.920 rmmod nvme_rdma 00:17:25.920 rmmod nvme_fabrics 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' 
-n '' ']' 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:17:25.920 00:17:25.920 real 0m2.638s 00:17:25.920 user 0m0.969s 00:17:25.920 sys 0m1.761s 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:25.920 14:15:53 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:25.920 ************************************ 00:17:25.920 END TEST nvmf_target_multipath 00:17:25.920 ************************************ 00:17:25.920 14:15:53 nvmf_rdma -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:17:25.920 14:15:53 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:25.920 14:15:53 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:25.920 14:15:53 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:17:25.920 ************************************ 00:17:25.920 START TEST nvmf_zcopy 00:17:25.920 ************************************ 00:17:25.920 14:15:53 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:17:26.178 * Looking for test storage... 
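The abrupt end of the multipath test above is deliberate: multipath.sh@51-54 guards on the transport type and bails out successfully on anything but TCP. Condensed; the variable name for the expanded transport value is an assumption:

if [ "$TEST_TRANSPORT" != tcp ]; then   # expands to: '[' rdma '!=' tcp ']' in this run
    echo 'run this test only with TCP transport for now'
    nvmftestfini
    exit 0                              # exit 0, so the runner counts it as a pass
fi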
00:17:26.178 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:26.178 14:15:53 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:26.178 14:15:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:17:26.178 14:15:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:26.178 14:15:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:26.178 14:15:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:26.178 14:15:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:26.178 14:15:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:26.178 14:15:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:26.178 14:15:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:26.178 14:15:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:26.178 14:15:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:26.178 14:15:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:26.178 14:15:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:17:26.178 14:15:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:17:26.178 14:15:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:26.178 14:15:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:26.178 14:15:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:26.178 14:15:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:26.178 14:15:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:26.178 14:15:53 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:26.178 14:15:53 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:26.178 14:15:53 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:26.178 14:15:53 nvmf_rdma.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.178 14:15:53 nvmf_rdma.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.178 14:15:53 nvmf_rdma.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.178 14:15:53 nvmf_rdma.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:17:26.179 14:15:53 nvmf_rdma.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.179 14:15:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:17:26.179 14:15:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:26.179 14:15:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:26.179 14:15:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:26.179 14:15:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:26.179 14:15:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:26.179 14:15:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:26.179 14:15:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:26.179 14:15:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:26.179 14:15:53 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:17:26.179 14:15:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:17:26.179 14:15:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:26.179 14:15:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:26.179 14:15:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:26.179 14:15:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:26.179 14:15:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.179 14:15:53 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:26.179 14:15:53 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.179 14:15:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # 
[[ phy != virt ]] 00:17:26.179 14:15:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:26.179 14:15:53 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:17:26.179 14:15:53 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:17:28.706 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:17:28.706 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:17:28.706 Found net devices under 0000:81:00.0: mlx_0_0 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:17:28.706 Found net devices under 0000:81:00.1: mlx_0_1 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:17:28.706 14:15:55 
nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@420 -- # rdma_device_init 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@58 -- # uname 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:28.706 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@502 -- # allocate_nic_ips 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:28.707 14:15:55 
nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:17:28.707 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:28.707 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:17:28.707 altname enp129s0f0np0 00:17:28.707 inet 192.168.100.8/24 scope global mlx_0_0 00:17:28.707 valid_lft forever preferred_lft forever 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:17:28.707 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:28.707 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:17:28.707 altname enp129s0f1np1 00:17:28.707 inet 192.168.100.9/24 scope global mlx_0_1 00:17:28.707 valid_lft forever preferred_lft forever 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:17:28.707 192.168.100.9' 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:17:28.707 192.168.100.9' 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # head -n 1 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:17:28.707 192.168.100.9' 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # tail -n +2 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # head -n 1 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=100234 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- 
nvmf/common.sh@482 -- # waitforlisten 100234 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 100234 ']' 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:28.707 14:15:55 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:28.707 [2024-07-24 14:15:55.884057] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:17:28.707 [2024-07-24 14:15:55.884135] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:28.707 EAL: No free 2048 kB hugepages reported on node 1 00:17:28.707 [2024-07-24 14:15:55.956060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.707 [2024-07-24 14:15:56.043452] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:28.707 [2024-07-24 14:15:56.043518] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:28.707 [2024-07-24 14:15:56.043544] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:28.707 [2024-07-24 14:15:56.043558] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:28.707 [2024-07-24 14:15:56.043571] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:28.707 [2024-07-24 14:15:56.043608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:28.965 14:15:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:17:28.966 Unsupported transport: rdma 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@804 -- # type=--id 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@805 -- # id=0 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@816 -- # for n in $shm_files 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:28.966 nvmf_trace.0 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@819 -- # return 0 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:28.966 rmmod nvme_rdma 00:17:28.966 rmmod nvme_fabrics 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 100234 ']' 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 100234 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 100234 ']' 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 100234 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy 
-- common/autotest_common.sh@951 -- # uname 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100234 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100234' 00:17:28.966 killing process with pid 100234 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 100234 00:17:28.966 14:15:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 100234 00:17:29.224 14:15:56 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:29.224 14:15:56 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:17:29.224 00:17:29.224 real 0m3.247s 00:17:29.224 user 0m1.707s 00:17:29.224 sys 0m1.999s 00:17:29.224 14:15:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:29.224 14:15:56 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:29.224 ************************************ 00:17:29.224 END TEST nvmf_zcopy 00:17:29.224 ************************************ 00:17:29.224 14:15:56 nvmf_rdma -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:17:29.224 14:15:56 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:29.224 14:15:56 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:29.224 14:15:56 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:17:29.224 ************************************ 00:17:29.224 START TEST nvmf_nmic 00:17:29.224 ************************************ 00:17:29.224 14:15:56 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:17:29.482 * Looking for test storage... 
00:17:29.482 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.482 
14:15:56 nvmf_rdma.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:17:29.482 14:15:56 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:32.010 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:17:32.011 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:32.011 14:15:59 
nvmf_rdma.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:17:32.011 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:17:32.011 Found net devices under 0000:81:00.0: mlx_0_0 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:17:32.011 Found net devices under 0000:81:00.1: mlx_0_1 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@420 -- # rdma_device_init 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:17:32.011 14:15:59 
nvmf_rdma.nvmf_nmic -- nvmf/common.sh@58 -- # uname 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@502 -- # allocate_nic_ips 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:17:32.011 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group 
default qlen 1000 00:17:32.011 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:17:32.011 altname enp129s0f0np0 00:17:32.011 inet 192.168.100.8/24 scope global mlx_0_0 00:17:32.011 valid_lft forever preferred_lft forever 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:17:32.011 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:32.011 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:17:32.011 altname enp129s0f1np1 00:17:32.011 inet 192.168.100.9/24 scope global mlx_0_1 00:17:32.011 valid_lft forever preferred_lft forever 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:32.011 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:17:32.012 14:15:59 
nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:17:32.012 192.168.100.9' 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:17:32.012 192.168.100.9' 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # head -n 1 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:17:32.012 192.168.100.9' 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # tail -n +2 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # head -n 1 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=102313 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 102313 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 102313 ']' 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:32.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:32.012 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:32.012 [2024-07-24 14:15:59.182542] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:17:32.012 [2024-07-24 14:15:59.182630] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:32.012 EAL: No free 2048 kB hugepages reported on node 1 00:17:32.012 [2024-07-24 14:15:59.249535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:32.012 [2024-07-24 14:15:59.338300] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:32.012 [2024-07-24 14:15:59.338379] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:32.012 [2024-07-24 14:15:59.338392] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:32.012 [2024-07-24 14:15:59.338403] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:32.012 [2024-07-24 14:15:59.338412] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:32.012 [2024-07-24 14:15:59.338493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:32.012 [2024-07-24 14:15:59.338559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:32.012 [2024-07-24 14:15:59.338625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:32.012 [2024-07-24 14:15:59.338627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.270 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:32.270 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:17:32.270 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:32.270 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:32.270 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:32.270 14:15:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:32.270 14:15:59 nvmf_rdma.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:32.270 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.270 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:32.270 [2024-07-24 14:15:59.521716] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x92c9e0/0x930ed0) succeed. 00:17:32.270 [2024-07-24 14:15:59.532785] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x92dfd0/0x972560) succeed. 
00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:32.528 Malloc0 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:32.528 [2024-07-24 14:15:59.719900] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:17:32.528 test case1: single bdev can't be used in multiple subsystems 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:32.528 [2024-07-24 14:15:59.743695] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:17:32.528 [2024-07-24 
14:15:59.743724] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:17:32.528 [2024-07-24 14:15:59.743738] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:32.528 request: 00:17:32.528 { 00:17:32.528 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:17:32.528 "namespace": { 00:17:32.528 "bdev_name": "Malloc0", 00:17:32.528 "no_auto_visible": false 00:17:32.528 }, 00:17:32.528 "method": "nvmf_subsystem_add_ns", 00:17:32.528 "req_id": 1 00:17:32.528 } 00:17:32.528 Got JSON-RPC error response 00:17:32.528 response: 00:17:32.528 { 00:17:32.528 "code": -32602, 00:17:32.528 "message": "Invalid parameters" 00:17:32.528 } 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:17:32.528 Adding namespace failed - expected result. 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:17:32.528 test case2: host connect to nvmf target in multiple paths 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:32.528 [2024-07-24 14:15:59.755766] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.528 14:15:59 nvmf_rdma.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:33.900 14:16:00 nvmf_rdma.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:17:34.833 14:16:02 nvmf_rdma.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:17:34.833 14:16:02 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:17:34.833 14:16:02 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:17:34.833 14:16:02 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:17:34.833 14:16:02 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:17:36.728 14:16:04 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:17:36.728 14:16:04 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:17:36.728 14:16:04 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:17:36.985 14:16:04 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:17:36.985 14:16:04 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:17:36.985 14:16:04 
nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:17:36.985 14:16:04 nvmf_rdma.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:36.985 [global] 00:17:36.985 thread=1 00:17:36.985 invalidate=1 00:17:36.985 rw=write 00:17:36.985 time_based=1 00:17:36.985 runtime=1 00:17:36.985 ioengine=libaio 00:17:36.985 direct=1 00:17:36.985 bs=4096 00:17:36.985 iodepth=1 00:17:36.985 norandommap=0 00:17:36.985 numjobs=1 00:17:36.985 00:17:36.985 verify_dump=1 00:17:36.985 verify_backlog=512 00:17:36.985 verify_state_save=0 00:17:36.985 do_verify=1 00:17:36.985 verify=crc32c-intel 00:17:36.985 [job0] 00:17:36.985 filename=/dev/nvme0n1 00:17:36.985 Could not set queue depth (nvme0n1) 00:17:36.985 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:36.985 fio-3.35 00:17:36.985 Starting 1 thread 00:17:38.356 00:17:38.356 job0: (groupid=0, jobs=1): err= 0: pid=103080: Wed Jul 24 14:16:05 2024 00:17:38.356 read: IOPS=7168, BW=28.0MiB/s (29.4MB/s)(28.0MiB/1000msec) 00:17:38.356 slat (nsec): min=3810, max=38070, avg=4982.95, stdev=1352.09 00:17:38.356 clat (usec): min=53, max=102, avg=64.40, stdev= 7.81 00:17:38.356 lat (usec): min=57, max=110, avg=69.39, stdev= 8.02 00:17:38.356 clat percentiles (usec): 00:17:38.357 | 1.00th=[ 56], 5.00th=[ 58], 10.00th=[ 59], 20.00th=[ 60], 00:17:38.357 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 62], 60.00th=[ 63], 00:17:38.357 | 70.00th=[ 65], 80.00th=[ 68], 90.00th=[ 78], 95.00th=[ 84], 00:17:38.357 | 99.00th=[ 91], 99.50th=[ 93], 99.90th=[ 98], 99.95th=[ 100], 00:17:38.357 | 99.99th=[ 103] 00:17:38.357 write: IOPS=7311, BW=28.6MiB/s (29.9MB/s)(28.6MiB/1000msec); 0 zone resets 00:17:38.357 slat (nsec): min=4449, max=28642, avg=5732.63, stdev=1562.81 00:17:38.357 clat (usec): min=49, max=115, avg=60.25, stdev= 7.86 00:17:38.357 lat (usec): min=54, max=142, avg=65.99, stdev= 8.16 00:17:38.357 clat percentiles (usec): 00:17:38.357 | 1.00th=[ 52], 5.00th=[ 53], 10.00th=[ 55], 20.00th=[ 56], 00:17:38.357 | 30.00th=[ 57], 40.00th=[ 58], 50.00th=[ 59], 60.00th=[ 60], 00:17:38.357 | 70.00th=[ 61], 80.00th=[ 63], 90.00th=[ 74], 95.00th=[ 80], 00:17:38.357 | 99.00th=[ 87], 99.50th=[ 90], 99.90th=[ 94], 99.95th=[ 98], 00:17:38.357 | 99.99th=[ 116] 00:17:38.357 bw ( KiB/s): min=29592, max=29592, per=100.00%, avg=29592.00, stdev= 0.00, samples=1 00:17:38.357 iops : min= 7398, max= 7398, avg=7398.00, stdev= 0.00, samples=1 00:17:38.357 lat (usec) : 50=0.05%, 100=99.92%, 250=0.03% 00:17:38.357 cpu : usr=5.10%, sys=10.20%, ctx=14479, majf=0, minf=2 00:17:38.357 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:38.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:38.357 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:38.357 issued rwts: total=7168,7311,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:38.357 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:38.357 00:17:38.357 Run status group 0 (all jobs): 00:17:38.357 READ: bw=28.0MiB/s (29.4MB/s), 28.0MiB/s-28.0MiB/s (29.4MB/s-29.4MB/s), io=28.0MiB (29.4MB), run=1000-1000msec 00:17:38.357 WRITE: bw=28.6MiB/s (29.9MB/s), 28.6MiB/s-28.6MiB/s (29.9MB/s-29.9MB/s), io=28.6MiB (29.9MB), run=1000-1000msec 00:17:38.357 00:17:38.357 Disk stats (read/write): 00:17:38.357 nvme0n1: ios=6403/6656, merge=0/0, ticks=408/369, in_queue=777, util=90.78% 00:17:38.357 14:16:05 
nvmf_rdma.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:40.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:17:40.883 14:16:07 nvmf_rdma.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:40.883 14:16:07 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:17:40.883 14:16:07 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:17:40.883 14:16:07 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:40.883 14:16:07 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:17:40.883 14:16:07 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:40.883 14:16:07 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:17:40.883 14:16:07 nvmf_rdma.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:17:40.883 14:16:07 nvmf_rdma.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:17:40.883 14:16:07 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:40.883 14:16:07 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:17:40.883 14:16:07 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:40.883 14:16:07 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:40.883 14:16:07 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:17:40.883 14:16:07 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:40.883 14:16:07 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:40.883 rmmod nvme_rdma 00:17:40.883 rmmod nvme_fabrics 00:17:40.883 14:16:07 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:40.883 14:16:07 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:17:40.883 14:16:07 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:17:40.883 14:16:07 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 102313 ']' 00:17:40.883 14:16:07 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 102313 00:17:40.883 14:16:07 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 102313 ']' 00:17:40.883 14:16:07 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 102313 00:17:40.883 14:16:07 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:17:40.883 14:16:07 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:40.883 14:16:07 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 102313 00:17:40.883 14:16:07 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:40.883 14:16:07 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:40.883 14:16:07 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 102313' 00:17:40.883 killing process with pid 102313 00:17:40.883 14:16:07 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 102313 00:17:40.883 14:16:07 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 102313 00:17:40.883 14:16:08 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:40.883 14:16:08 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:17:40.883 00:17:40.883 real 0m11.481s 00:17:40.883 user 0m35.816s 00:17:40.883 sys 0m2.379s 00:17:40.883 14:16:08 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1122 -- # 
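Teardown in nmic.sh mirrors the setup: disconnect the NVMe controllers, unload the initiator-side modules, and kill the nvmf target app. As a standalone sketch (102313 was this run's nvmfpid; a script would use the variable recorded at nvmfappstart time rather than a literal PID):

# Drop both controllers created for cnode1 (ports 4420 and 4421).
nvme disconnect -n nqn.2016-06.io.spdk:cnode1

# Unload initiator-side modules; nvme-rdma has to go before nvme-fabrics.
modprobe -v -r nvme-rdma
modprobe -v -r nvme-fabrics

# Stop the SPDK target (nvmfpid=102313 in this run).
nvmfpid=102313
kill "$nvmfpid"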
xtrace_disable 00:17:40.883 14:16:08 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:40.883 ************************************ 00:17:40.883 END TEST nvmf_nmic 00:17:40.883 ************************************ 00:17:40.883 14:16:08 nvmf_rdma -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:17:40.883 14:16:08 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:40.883 14:16:08 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:40.883 14:16:08 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:17:40.883 ************************************ 00:17:40.883 START TEST nvmf_fio_target 00:17:40.883 ************************************ 00:17:40.883 14:16:08 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:17:40.883 * Looking for test storage... 00:17:40.883 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:40.883 14:16:08 nvmf_rdma.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:40.883 14:16:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:17:40.883 14:16:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:40.883 14:16:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:40.883 14:16:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:40.883 14:16:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:40.883 14:16:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:40.883 14:16:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:40.883 14:16:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:40.883 14:16:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:40.883 14:16:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:40.883 14:16:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:40.883 14:16:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:17:40.883 14:16:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:17:40.883 14:16:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:40.883 14:16:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:40.883 14:16:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:40.883 14:16:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:40.883 14:16:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:40.883 14:16:08 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:40.883 14:16:08 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:40.884 14:16:08 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:40.884 14:16:08 
nvmf_rdma.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.884 14:16:08 nvmf_rdma.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.884 14:16:08 nvmf_rdma.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.884 14:16:08 nvmf_rdma.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:17:40.884 14:16:08 nvmf_rdma.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.884 14:16:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:17:40.884 14:16:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:40.884 14:16:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:40.884 14:16:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:40.884 14:16:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:40.884 14:16:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:40.884 14:16:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:40.884 14:16:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:40.884 14:16:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:40.884 14:16:08 nvmf_rdma.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:40.884 14:16:08 nvmf_rdma.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:40.884 14:16:08 nvmf_rdma.nvmf_fio_target -- 
target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:40.884 14:16:08 nvmf_rdma.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:17:40.884 14:16:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:17:40.884 14:16:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:40.884 14:16:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:40.884 14:16:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:40.884 14:16:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:40.884 14:16:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.884 14:16:08 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:40.884 14:16:08 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.884 14:16:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:40.884 14:16:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:40.884 14:16:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:40.884 14:16:08 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.447 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:43.447 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:43.447 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:43.447 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:43.447 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:43.447 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:43.447 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:43.447 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:43.447 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:43.447 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:17:43.447 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:43.447 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:17:43.447 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:43.447 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:17:43.447 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:43.447 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:17:43.448 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:17:43.448 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 
)) 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:17:43.448 Found net devices under 0000:81:00.0: mlx_0_0 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:17:43.448 Found net devices under 0000:81:00.1: mlx_0_1 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@420 -- # rdma_device_init 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@58 -- # uname 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:43.448 14:16:10 
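rdma_device_init above loads the whole in-kernel IB/RDMA stack before any addresses are assigned. The exact module list from load_ib_rdma_modules, as a standalone sketch (requires root; these are the stock upstream module names):

# Load the RDMA kernel modules probed by nvmf/common.sh.
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"
done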
nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:17:43.448 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:43.448 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:17:43.448 altname enp129s0f0np0 00:17:43.448 inet 192.168.100.8/24 scope global mlx_0_0 00:17:43.448 valid_lft forever preferred_lft forever 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:17:43.448 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:43.448 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:17:43.448 altname enp129s0f1np1 00:17:43.448 inet 192.168.100.9/24 scope global mlx_0_1 00:17:43.448 valid_lft forever preferred_lft forever 00:17:43.448 14:16:10 
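allocate_nic_ips resolves each RDMA netdev to its first IPv4 address with the small ip/awk/cut pipeline visible in the trace. The same helper, isolated:

# Print the first IPv4 address of a netdev (matches nvmf/common.sh's get_ip_address).
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address mlx_0_0   # -> 192.168.100.8 in this run
get_ip_address mlx_0_1   # -> 192.168.100.9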
nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:43.448 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@456 -- # 
RDMA_IP_LIST='192.168.100.8 00:17:43.449 192.168.100.9' 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:17:43.449 192.168.100.9' 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # head -n 1 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:17:43.449 192.168.100.9' 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # tail -n +2 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # head -n 1 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=105429 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 105429 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 105429 ']' 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:43.449 14:16:10 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.449 [2024-07-24 14:16:10.629406] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:17:43.449 [2024-07-24 14:16:10.629502] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.449 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.449 [2024-07-24 14:16:10.706812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:43.449 [2024-07-24 14:16:10.798282] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:43.449 [2024-07-24 14:16:10.798350] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:43.449 [2024-07-24 14:16:10.798377] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:43.449 [2024-07-24 14:16:10.798392] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:43.449 [2024-07-24 14:16:10.798404] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:43.449 [2024-07-24 14:16:10.798484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.449 [2024-07-24 14:16:10.798553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.449 [2024-07-24 14:16:10.798648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:43.449 [2024-07-24 14:16:10.798650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.708 14:16:10 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:43.708 14:16:10 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:17:43.708 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:43.708 14:16:10 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:43.708 14:16:10 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.708 14:16:10 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.708 14:16:10 nvmf_rdma.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:43.966 [2024-07-24 14:16:11.190272] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xcb99e0/0xcbded0) succeed. 00:17:43.966 [2024-07-24 14:16:11.201192] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xcbafd0/0xcff560) succeed. 
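With the RDMA transport created and both mlx5 devices registered, fio.sh assembles its target: seven 64 MiB/512 B malloc bdevs, a raid0 over two and a concat volume over three of them, one subsystem with four namespaces, and one listener. A condensed sketch of the RPC sequence traced below (all commands and arguments are as they appear in the log; assumes a running nvmf_tgt):

rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

$rpc_py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

# Seven 64 MiB malloc bdevs with 512 B blocks (Malloc0..Malloc6).
for _ in $(seq 0 6); do $rpc_py bdev_malloc_create 64 512; done

# RAID-0 over Malloc2/Malloc3, concat over Malloc4/Malloc5/Malloc6.
$rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc_py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

# One subsystem, four namespaces, one RDMA listener on port 4420.
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0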
00:17:44.224 14:16:11 nvmf_rdma.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:44.481 14:16:11 nvmf_rdma.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:17:44.481 14:16:11 nvmf_rdma.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:44.739 14:16:11 nvmf_rdma.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:17:44.739 14:16:11 nvmf_rdma.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:44.996 14:16:12 nvmf_rdma.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:17:44.996 14:16:12 nvmf_rdma.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:45.255 14:16:12 nvmf_rdma.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:17:45.255 14:16:12 nvmf_rdma.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:17:45.512 14:16:12 nvmf_rdma.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:45.770 14:16:12 nvmf_rdma.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:17:45.770 14:16:12 nvmf_rdma.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:46.028 14:16:13 nvmf_rdma.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:17:46.028 14:16:13 nvmf_rdma.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:46.285 14:16:13 nvmf_rdma.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:17:46.285 14:16:13 nvmf_rdma.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:17:46.542 14:16:13 nvmf_rdma.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:46.799 14:16:14 nvmf_rdma.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:46.800 14:16:14 nvmf_rdma.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:47.057 14:16:14 nvmf_rdma.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:47.057 14:16:14 nvmf_rdma.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:47.314 14:16:14 nvmf_rdma.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:47.572 [2024-07-24 14:16:14.832885] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:47.572 14:16:14 nvmf_rdma.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 raid0 00:17:47.830 14:16:15 nvmf_rdma.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:17:48.087 14:16:15 nvmf_rdma.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:49.457 14:16:16 nvmf_rdma.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:17:49.457 14:16:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:17:49.457 14:16:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:17:49.457 14:16:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:17:49.457 14:16:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:17:49.457 14:16:16 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:17:51.352 14:16:18 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:17:51.352 14:16:18 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:17:51.352 14:16:18 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:17:51.352 14:16:18 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:17:51.352 14:16:18 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:17:51.352 14:16:18 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:17:51.352 14:16:18 nvmf_rdma.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:51.352 [global] 00:17:51.352 thread=1 00:17:51.352 invalidate=1 00:17:51.352 rw=write 00:17:51.352 time_based=1 00:17:51.352 runtime=1 00:17:51.352 ioengine=libaio 00:17:51.352 direct=1 00:17:51.352 bs=4096 00:17:51.352 iodepth=1 00:17:51.352 norandommap=0 00:17:51.352 numjobs=1 00:17:51.352 00:17:51.352 verify_dump=1 00:17:51.352 verify_backlog=512 00:17:51.352 verify_state_save=0 00:17:51.352 do_verify=1 00:17:51.352 verify=crc32c-intel 00:17:51.352 [job0] 00:17:51.352 filename=/dev/nvme0n1 00:17:51.352 [job1] 00:17:51.352 filename=/dev/nvme0n2 00:17:51.352 [job2] 00:17:51.352 filename=/dev/nvme0n3 00:17:51.352 [job3] 00:17:51.352 filename=/dev/nvme0n4 00:17:51.352 Could not set queue depth (nvme0n1) 00:17:51.352 Could not set queue depth (nvme0n2) 00:17:51.352 Could not set queue depth (nvme0n3) 00:17:51.352 Could not set queue depth (nvme0n4) 00:17:51.352 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:51.352 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:51.352 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:51.352 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:51.352 fio-3.35 00:17:51.352 Starting 4 threads 00:17:52.725 00:17:52.725 job0: (groupid=0, jobs=1): err= 0: pid=106511: Wed Jul 24 14:16:19 2024 00:17:52.725 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:17:52.725 slat 
(nsec): min=4448, max=57121, avg=10137.72, stdev=3775.73 00:17:52.725 clat (usec): min=88, max=263, avg=138.32, stdev=24.35 00:17:52.725 lat (usec): min=102, max=271, avg=148.46, stdev=25.13 00:17:52.725 clat percentiles (usec): 00:17:52.725 | 1.00th=[ 111], 5.00th=[ 115], 10.00th=[ 117], 20.00th=[ 120], 00:17:52.725 | 30.00th=[ 122], 40.00th=[ 125], 50.00th=[ 129], 60.00th=[ 139], 00:17:52.725 | 70.00th=[ 147], 80.00th=[ 159], 90.00th=[ 174], 95.00th=[ 188], 00:17:52.725 | 99.00th=[ 217], 99.50th=[ 229], 99.90th=[ 260], 99.95th=[ 262], 00:17:52.725 | 99.99th=[ 265] 00:17:52.725 write: IOPS=3550, BW=13.9MiB/s (14.5MB/s)(13.9MiB/1001msec); 0 zone resets 00:17:52.725 slat (nsec): min=5100, max=56918, avg=11677.17, stdev=4596.44 00:17:52.725 clat (usec): min=81, max=276, avg=136.31, stdev=27.74 00:17:52.725 lat (usec): min=105, max=289, avg=147.99, stdev=28.78 00:17:52.725 clat percentiles (usec): 00:17:52.725 | 1.00th=[ 104], 5.00th=[ 109], 10.00th=[ 112], 20.00th=[ 115], 00:17:52.725 | 30.00th=[ 118], 40.00th=[ 121], 50.00th=[ 127], 60.00th=[ 135], 00:17:52.725 | 70.00th=[ 147], 80.00th=[ 161], 90.00th=[ 176], 95.00th=[ 192], 00:17:52.725 | 99.00th=[ 229], 99.50th=[ 243], 99.90th=[ 262], 99.95th=[ 277], 00:17:52.725 | 99.99th=[ 277] 00:17:52.725 bw ( KiB/s): min=16352, max=16352, per=25.02%, avg=16352.00, stdev= 0.00, samples=1 00:17:52.725 iops : min= 4088, max= 4088, avg=4088.00, stdev= 0.00, samples=1 00:17:52.725 lat (usec) : 100=0.15%, 250=99.62%, 500=0.23% 00:17:52.725 cpu : usr=3.90%, sys=10.30%, ctx=6626, majf=0, minf=2 00:17:52.725 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:52.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.725 issued rwts: total=3072,3554,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:52.725 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:52.725 job1: (groupid=0, jobs=1): err= 0: pid=106515: Wed Jul 24 14:16:19 2024 00:17:52.725 read: IOPS=3288, BW=12.8MiB/s (13.5MB/s)(12.9MiB/1001msec) 00:17:52.725 slat (nsec): min=4277, max=37382, avg=5523.60, stdev=2292.00 00:17:52.725 clat (usec): min=79, max=357, avg=141.73, stdev=29.79 00:17:52.725 lat (usec): min=84, max=363, avg=147.26, stdev=30.48 00:17:52.725 clat percentiles (usec): 00:17:52.725 | 1.00th=[ 88], 5.00th=[ 95], 10.00th=[ 113], 20.00th=[ 122], 00:17:52.725 | 30.00th=[ 126], 40.00th=[ 130], 50.00th=[ 135], 60.00th=[ 143], 00:17:52.725 | 70.00th=[ 155], 80.00th=[ 167], 90.00th=[ 182], 95.00th=[ 192], 00:17:52.725 | 99.00th=[ 212], 99.50th=[ 243], 99.90th=[ 314], 99.95th=[ 347], 00:17:52.725 | 99.99th=[ 359] 00:17:52.725 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:17:52.725 slat (nsec): min=5004, max=33794, avg=6987.43, stdev=3159.37 00:17:52.725 clat (usec): min=62, max=357, avg=133.82, stdev=33.67 00:17:52.725 lat (usec): min=68, max=369, avg=140.81, stdev=35.16 00:17:52.725 clat percentiles (usec): 00:17:52.725 | 1.00th=[ 68], 5.00th=[ 74], 10.00th=[ 85], 20.00th=[ 113], 00:17:52.725 | 30.00th=[ 120], 40.00th=[ 124], 50.00th=[ 129], 60.00th=[ 141], 00:17:52.725 | 70.00th=[ 157], 80.00th=[ 165], 90.00th=[ 180], 95.00th=[ 188], 00:17:52.725 | 99.00th=[ 204], 99.50th=[ 208], 99.90th=[ 229], 99.95th=[ 262], 00:17:52.725 | 99.99th=[ 359] 00:17:52.725 bw ( KiB/s): min=16384, max=16384, per=25.07%, avg=16384.00, stdev= 0.00, samples=1 00:17:52.725 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 
00:17:52.725 lat (usec) : 100=11.46%, 250=88.28%, 500=0.26% 00:17:52.725 cpu : usr=1.80%, sys=6.70%, ctx=6876, majf=0, minf=1 00:17:52.725 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:52.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.725 issued rwts: total=3292,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:52.725 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:52.725 job2: (groupid=0, jobs=1): err= 0: pid=106516: Wed Jul 24 14:16:19 2024 00:17:52.725 read: IOPS=3989, BW=15.6MiB/s (16.3MB/s)(15.6MiB/1001msec) 00:17:52.725 slat (nsec): min=4392, max=45010, avg=6408.55, stdev=4536.84 00:17:52.725 clat (usec): min=81, max=268, avg=119.32, stdev=35.01 00:17:52.725 lat (usec): min=86, max=274, avg=125.72, stdev=37.31 00:17:52.725 clat percentiles (usec): 00:17:52.725 | 1.00th=[ 86], 5.00th=[ 89], 10.00th=[ 91], 20.00th=[ 93], 00:17:52.725 | 30.00th=[ 95], 40.00th=[ 97], 50.00th=[ 102], 60.00th=[ 112], 00:17:52.725 | 70.00th=[ 127], 80.00th=[ 159], 90.00th=[ 174], 95.00th=[ 190], 00:17:52.725 | 99.00th=[ 217], 99.50th=[ 229], 99.90th=[ 253], 99.95th=[ 258], 00:17:52.725 | 99.99th=[ 269] 00:17:52.725 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:17:52.725 slat (nsec): min=5003, max=57985, avg=7304.00, stdev=5243.88 00:17:52.725 clat (usec): min=77, max=279, avg=110.86, stdev=32.71 00:17:52.725 lat (usec): min=82, max=287, avg=118.16, stdev=35.64 00:17:52.725 clat percentiles (usec): 00:17:52.725 | 1.00th=[ 81], 5.00th=[ 84], 10.00th=[ 85], 20.00th=[ 87], 00:17:52.725 | 30.00th=[ 89], 40.00th=[ 91], 50.00th=[ 94], 60.00th=[ 103], 00:17:52.725 | 70.00th=[ 114], 80.00th=[ 143], 90.00th=[ 165], 95.00th=[ 180], 00:17:52.725 | 99.00th=[ 200], 99.50th=[ 210], 99.90th=[ 237], 99.95th=[ 255], 00:17:52.725 | 99.99th=[ 281] 00:17:52.725 bw ( KiB/s): min=20480, max=20480, per=31.34%, avg=20480.00, stdev= 0.00, samples=1 00:17:52.725 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:17:52.725 lat (usec) : 100=52.48%, 250=47.41%, 500=0.11% 00:17:52.725 cpu : usr=3.60%, sys=6.60%, ctx=8089, majf=0, minf=1 00:17:52.725 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:52.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.725 issued rwts: total=3993,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:52.725 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:52.725 job3: (groupid=0, jobs=1): err= 0: pid=106517: Wed Jul 24 14:16:19 2024 00:17:52.725 read: IOPS=4943, BW=19.3MiB/s (20.2MB/s)(19.3MiB/1001msec) 00:17:52.725 slat (nsec): min=4387, max=24785, avg=5880.31, stdev=2364.71 00:17:52.725 clat (usec): min=75, max=174, avg=92.23, stdev=12.68 00:17:52.725 lat (usec): min=80, max=179, avg=98.11, stdev=13.16 00:17:52.725 clat percentiles (usec): 00:17:52.725 | 1.00th=[ 79], 5.00th=[ 81], 10.00th=[ 82], 20.00th=[ 84], 00:17:52.725 | 30.00th=[ 85], 40.00th=[ 87], 50.00th=[ 89], 60.00th=[ 91], 00:17:52.725 | 70.00th=[ 94], 80.00th=[ 100], 90.00th=[ 109], 95.00th=[ 117], 00:17:52.725 | 99.00th=[ 143], 99.50th=[ 153], 99.90th=[ 165], 99.95th=[ 165], 00:17:52.725 | 99.99th=[ 176] 00:17:52.725 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:17:52.725 slat (nsec): min=5109, max=33986, avg=6580.63, stdev=2483.80 
00:17:52.725 clat (usec): min=68, max=299, avg=90.83, stdev=19.43 00:17:52.725 lat (usec): min=74, max=306, avg=97.41, stdev=19.66 00:17:52.725 clat percentiles (usec): 00:17:52.725 | 1.00th=[ 73], 5.00th=[ 76], 10.00th=[ 77], 20.00th=[ 79], 00:17:52.725 | 30.00th=[ 80], 40.00th=[ 82], 50.00th=[ 84], 60.00th=[ 86], 00:17:52.725 | 70.00th=[ 91], 80.00th=[ 100], 90.00th=[ 124], 95.00th=[ 137], 00:17:52.725 | 99.00th=[ 159], 99.50th=[ 165], 99.90th=[ 174], 99.95th=[ 192], 00:17:52.725 | 99.99th=[ 302] 00:17:52.725 bw ( KiB/s): min=20928, max=20928, per=32.02%, avg=20928.00, stdev= 0.00, samples=1 00:17:52.725 iops : min= 5232, max= 5232, avg=5232.00, stdev= 0.00, samples=1 00:17:52.725 lat (usec) : 100=80.31%, 250=19.68%, 500=0.01% 00:17:52.725 cpu : usr=5.70%, sys=6.70%, ctx=10068, majf=0, minf=1 00:17:52.725 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:52.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.725 issued rwts: total=4948,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:52.725 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:52.725 00:17:52.725 Run status group 0 (all jobs): 00:17:52.725 READ: bw=59.7MiB/s (62.6MB/s), 12.0MiB/s-19.3MiB/s (12.6MB/s-20.2MB/s), io=59.8MiB (62.7MB), run=1001-1001msec 00:17:52.725 WRITE: bw=63.8MiB/s (66.9MB/s), 13.9MiB/s-20.0MiB/s (14.5MB/s-20.9MB/s), io=63.9MiB (67.0MB), run=1001-1001msec 00:17:52.725 00:17:52.725 Disk stats (read/write): 00:17:52.725 nvme0n1: ios=2805/3072, merge=0/0, ticks=395/409, in_queue=804, util=85.87% 00:17:52.725 nvme0n2: ios=2971/3072, merge=0/0, ticks=414/397, in_queue=811, util=86.43% 00:17:52.725 nvme0n3: ios=3584/3712, merge=0/0, ticks=413/402, in_queue=815, util=88.78% 00:17:52.725 nvme0n4: ios=4096/4500, merge=0/0, ticks=389/414, in_queue=803, util=89.53% 00:17:52.725 14:16:19 nvmf_rdma.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:17:52.725 [global] 00:17:52.725 thread=1 00:17:52.725 invalidate=1 00:17:52.725 rw=randwrite 00:17:52.725 time_based=1 00:17:52.725 runtime=1 00:17:52.725 ioengine=libaio 00:17:52.725 direct=1 00:17:52.725 bs=4096 00:17:52.725 iodepth=1 00:17:52.725 norandommap=0 00:17:52.725 numjobs=1 00:17:52.725 00:17:52.725 verify_dump=1 00:17:52.725 verify_backlog=512 00:17:52.726 verify_state_save=0 00:17:52.726 do_verify=1 00:17:52.726 verify=crc32c-intel 00:17:52.726 [job0] 00:17:52.726 filename=/dev/nvme0n1 00:17:52.726 [job1] 00:17:52.726 filename=/dev/nvme0n2 00:17:52.726 [job2] 00:17:52.726 filename=/dev/nvme0n3 00:17:52.726 [job3] 00:17:52.726 filename=/dev/nvme0n4 00:17:52.726 Could not set queue depth (nvme0n1) 00:17:52.726 Could not set queue depth (nvme0n2) 00:17:52.726 Could not set queue depth (nvme0n3) 00:17:52.726 Could not set queue depth (nvme0n4) 00:17:52.983 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:52.983 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:52.983 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:52.983 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:52.983 fio-3.35 00:17:52.983 Starting 4 threads 00:17:54.356 00:17:54.356 job0: 
(groupid=0, jobs=1): err= 0: pid=106861: Wed Jul 24 14:16:21 2024 00:17:54.356 read: IOPS=3362, BW=13.1MiB/s (13.8MB/s)(13.1MiB/1001msec) 00:17:54.356 slat (nsec): min=4364, max=16035, avg=5209.65, stdev=738.28 00:17:54.356 clat (usec): min=82, max=383, avg=141.34, stdev=13.07 00:17:54.356 lat (usec): min=87, max=388, avg=146.54, stdev=13.07 00:17:54.356 clat percentiles (usec): 00:17:54.356 | 1.00th=[ 123], 5.00th=[ 127], 10.00th=[ 129], 20.00th=[ 131], 00:17:54.356 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 141], 00:17:54.356 | 70.00th=[ 147], 80.00th=[ 153], 90.00th=[ 159], 95.00th=[ 165], 00:17:54.356 | 99.00th=[ 174], 99.50th=[ 178], 99.90th=[ 192], 99.95th=[ 206], 00:17:54.356 | 99.99th=[ 383] 00:17:54.356 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:17:54.356 slat (nsec): min=5107, max=35468, avg=6414.24, stdev=1826.46 00:17:54.356 clat (usec): min=81, max=494, avg=131.86, stdev=13.88 00:17:54.356 lat (usec): min=88, max=499, avg=138.27, stdev=14.16 00:17:54.356 clat percentiles (usec): 00:17:54.356 | 1.00th=[ 115], 5.00th=[ 118], 10.00th=[ 120], 20.00th=[ 122], 00:17:54.356 | 30.00th=[ 124], 40.00th=[ 126], 50.00th=[ 129], 60.00th=[ 131], 00:17:54.356 | 70.00th=[ 137], 80.00th=[ 143], 90.00th=[ 151], 95.00th=[ 155], 00:17:54.356 | 99.00th=[ 167], 99.50th=[ 172], 99.90th=[ 196], 99.95th=[ 235], 00:17:54.356 | 99.99th=[ 494] 00:17:54.356 bw ( KiB/s): min=15488, max=15488, per=22.96%, avg=15488.00, stdev= 0.00, samples=1 00:17:54.356 iops : min= 3872, max= 3872, avg=3872.00, stdev= 0.00, samples=1 00:17:54.356 lat (usec) : 100=0.13%, 250=99.84%, 500=0.03% 00:17:54.356 cpu : usr=3.30%, sys=4.80%, ctx=6950, majf=0, minf=1 00:17:54.356 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:54.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.356 issued rwts: total=3366,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.356 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:54.356 job1: (groupid=0, jobs=1): err= 0: pid=106862: Wed Jul 24 14:16:21 2024 00:17:54.356 read: IOPS=4904, BW=19.2MiB/s (20.1MB/s)(19.2MiB/1001msec) 00:17:54.356 slat (nsec): min=4789, max=32021, avg=9181.36, stdev=3873.63 00:17:54.356 clat (usec): min=58, max=487, avg=92.40, stdev=23.26 00:17:54.356 lat (usec): min=73, max=493, avg=101.58, stdev=22.06 00:17:54.356 clat percentiles (usec): 00:17:54.356 | 1.00th=[ 70], 5.00th=[ 74], 10.00th=[ 75], 20.00th=[ 77], 00:17:54.356 | 30.00th=[ 79], 40.00th=[ 81], 50.00th=[ 83], 60.00th=[ 86], 00:17:54.356 | 70.00th=[ 94], 80.00th=[ 105], 90.00th=[ 133], 95.00th=[ 143], 00:17:54.356 | 99.00th=[ 159], 99.50th=[ 163], 99.90th=[ 174], 99.95th=[ 176], 00:17:54.356 | 99.99th=[ 486] 00:17:54.356 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:17:54.356 slat (nsec): min=5551, max=39363, avg=9941.24, stdev=3975.13 00:17:54.356 clat (usec): min=52, max=177, avg=83.39, stdev=18.22 00:17:54.356 lat (usec): min=67, max=206, avg=93.34, stdev=17.72 00:17:54.356 clat percentiles (usec): 00:17:54.356 | 1.00th=[ 65], 5.00th=[ 69], 10.00th=[ 70], 20.00th=[ 72], 00:17:54.356 | 30.00th=[ 74], 40.00th=[ 75], 50.00th=[ 77], 60.00th=[ 79], 00:17:54.356 | 70.00th=[ 84], 80.00th=[ 92], 90.00th=[ 117], 95.00th=[ 125], 00:17:54.356 | 99.00th=[ 149], 99.50th=[ 153], 99.90th=[ 167], 99.95th=[ 172], 00:17:54.356 | 99.99th=[ 178] 00:17:54.356 bw ( KiB/s): min=21136, 
max=21136, per=31.34%, avg=21136.00, stdev= 0.00, samples=1 00:17:54.356 iops : min= 5284, max= 5284, avg=5284.00, stdev= 0.00, samples=1 00:17:54.356 lat (usec) : 100=81.41%, 250=18.58%, 500=0.01% 00:17:54.356 cpu : usr=6.30%, sys=11.90%, ctx=10029, majf=0, minf=1 00:17:54.356 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:54.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.356 issued rwts: total=4909,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.356 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:54.356 job2: (groupid=0, jobs=1): err= 0: pid=106863: Wed Jul 24 14:16:21 2024 00:17:54.356 read: IOPS=3360, BW=13.1MiB/s (13.8MB/s)(13.2MiB/1002msec) 00:17:54.356 slat (nsec): min=4569, max=19867, avg=5426.90, stdev=747.05 00:17:54.356 clat (usec): min=94, max=268, avg=141.14, stdev=12.14 00:17:54.356 lat (usec): min=99, max=273, avg=146.57, stdev=12.13 00:17:54.356 clat percentiles (usec): 00:17:54.356 | 1.00th=[ 124], 5.00th=[ 127], 10.00th=[ 129], 20.00th=[ 133], 00:17:54.356 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 141], 00:17:54.356 | 70.00th=[ 147], 80.00th=[ 153], 90.00th=[ 159], 95.00th=[ 163], 00:17:54.356 | 99.00th=[ 172], 99.50th=[ 176], 99.90th=[ 186], 99.95th=[ 198], 00:17:54.356 | 99.99th=[ 269] 00:17:54.356 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:17:54.356 slat (nsec): min=5291, max=38583, avg=6592.31, stdev=1709.93 00:17:54.356 clat (usec): min=93, max=421, avg=131.78, stdev=13.50 00:17:54.356 lat (usec): min=99, max=428, avg=138.37, stdev=13.81 00:17:54.356 clat percentiles (usec): 00:17:54.356 | 1.00th=[ 115], 5.00th=[ 118], 10.00th=[ 120], 20.00th=[ 122], 00:17:54.356 | 30.00th=[ 124], 40.00th=[ 126], 50.00th=[ 128], 60.00th=[ 131], 00:17:54.356 | 70.00th=[ 137], 80.00th=[ 143], 90.00th=[ 151], 95.00th=[ 155], 00:17:54.356 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 188], 99.95th=[ 249], 00:17:54.356 | 99.99th=[ 420] 00:17:54.356 bw ( KiB/s): min=13192, max=15480, per=21.25%, avg=14336.00, stdev=1617.86, samples=2 00:17:54.356 iops : min= 3298, max= 3870, avg=3584.00, stdev=404.47, samples=2 00:17:54.356 lat (usec) : 100=0.04%, 250=99.93%, 500=0.03% 00:17:54.356 cpu : usr=1.50%, sys=6.69%, ctx=6952, majf=0, minf=1 00:17:54.356 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:54.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.356 issued rwts: total=3367,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.356 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:54.356 job3: (groupid=0, jobs=1): err= 0: pid=106864: Wed Jul 24 14:16:21 2024 00:17:54.356 read: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(17.9MiB/1001msec) 00:17:54.356 slat (nsec): min=4520, max=17333, avg=5406.98, stdev=854.33 00:17:54.356 clat (usec): min=78, max=174, avg=105.57, stdev=18.14 00:17:54.356 lat (usec): min=83, max=179, avg=110.97, stdev=18.24 00:17:54.356 clat percentiles (usec): 00:17:54.356 | 1.00th=[ 85], 5.00th=[ 88], 10.00th=[ 89], 20.00th=[ 92], 00:17:54.356 | 30.00th=[ 94], 40.00th=[ 95], 50.00th=[ 98], 60.00th=[ 103], 00:17:54.356 | 70.00th=[ 112], 80.00th=[ 123], 90.00th=[ 133], 95.00th=[ 143], 00:17:54.356 | 99.00th=[ 159], 99.50th=[ 163], 99.90th=[ 167], 99.95th=[ 169], 00:17:54.356 | 99.99th=[ 176] 00:17:54.356 
write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:17:54.356 slat (nsec): min=5068, max=38564, avg=6280.41, stdev=1705.57 00:17:54.356 clat (usec): min=72, max=176, avg=97.76, stdev=15.92 00:17:54.356 lat (usec): min=78, max=198, avg=104.04, stdev=16.28 00:17:54.356 clat percentiles (usec): 00:17:54.356 | 1.00th=[ 80], 5.00th=[ 82], 10.00th=[ 84], 20.00th=[ 86], 00:17:54.356 | 30.00th=[ 88], 40.00th=[ 90], 50.00th=[ 92], 60.00th=[ 95], 00:17:54.356 | 70.00th=[ 102], 80.00th=[ 111], 90.00th=[ 122], 95.00th=[ 129], 00:17:54.356 | 99.00th=[ 151], 99.50th=[ 155], 99.90th=[ 165], 99.95th=[ 172], 00:17:54.356 | 99.99th=[ 178] 00:17:54.356 bw ( KiB/s): min=20480, max=20480, per=30.36%, avg=20480.00, stdev= 0.00, samples=1 00:17:54.356 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:17:54.356 lat (usec) : 100=61.59%, 250=38.41% 00:17:54.356 cpu : usr=4.60%, sys=5.80%, ctx=9179, majf=0, minf=2 00:17:54.356 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:54.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.356 issued rwts: total=4571,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.356 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:54.356 00:17:54.356 Run status group 0 (all jobs): 00:17:54.356 READ: bw=63.2MiB/s (66.3MB/s), 13.1MiB/s-19.2MiB/s (13.8MB/s-20.1MB/s), io=63.3MiB (66.4MB), run=1001-1002msec 00:17:54.356 WRITE: bw=65.9MiB/s (69.1MB/s), 14.0MiB/s-20.0MiB/s (14.7MB/s-20.9MB/s), io=66.0MiB (69.2MB), run=1001-1002msec 00:17:54.356 00:17:54.356 Disk stats (read/write): 00:17:54.356 nvme0n1: ios=2610/3064, merge=0/0, ticks=389/413, in_queue=802, util=82.46% 00:17:54.356 nvme0n2: ios=4097/4608, merge=0/0, ticks=356/368, in_queue=724, util=83.57% 00:17:54.356 nvme0n3: ios=2560/3065, merge=0/0, ticks=372/416, in_queue=788, util=87.81% 00:17:54.356 nvme0n4: ios=3760/4096, merge=0/0, ticks=381/389, in_queue=770, util=89.25% 00:17:54.356 14:16:21 nvmf_rdma.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:17:54.357 [global] 00:17:54.357 thread=1 00:17:54.357 invalidate=1 00:17:54.357 rw=write 00:17:54.357 time_based=1 00:17:54.357 runtime=1 00:17:54.357 ioengine=libaio 00:17:54.357 direct=1 00:17:54.357 bs=4096 00:17:54.357 iodepth=128 00:17:54.357 norandommap=0 00:17:54.357 numjobs=1 00:17:54.357 00:17:54.357 verify_dump=1 00:17:54.357 verify_backlog=512 00:17:54.357 verify_state_save=0 00:17:54.357 do_verify=1 00:17:54.357 verify=crc32c-intel 00:17:54.357 [job0] 00:17:54.357 filename=/dev/nvme0n1 00:17:54.357 [job1] 00:17:54.357 filename=/dev/nvme0n2 00:17:54.357 [job2] 00:17:54.357 filename=/dev/nvme0n3 00:17:54.357 [job3] 00:17:54.357 filename=/dev/nvme0n4 00:17:54.357 Could not set queue depth (nvme0n1) 00:17:54.357 Could not set queue depth (nvme0n2) 00:17:54.357 Could not set queue depth (nvme0n3) 00:17:54.357 Could not set queue depth (nvme0n4) 00:17:54.357 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:54.357 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:54.357 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:54.357 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:17:54.357 fio-3.35 00:17:54.357 Starting 4 threads 00:17:55.732 00:17:55.732 job0: (groupid=0, jobs=1): err= 0: pid=107091: Wed Jul 24 14:16:22 2024 00:17:55.732 read: IOPS=6350, BW=24.8MiB/s (26.0MB/s)(24.9MiB/1003msec) 00:17:55.732 slat (usec): min=2, max=2751, avg=76.36, stdev=285.52 00:17:55.732 clat (usec): min=1178, max=17649, avg=9735.21, stdev=3538.39 00:17:55.732 lat (usec): min=3567, max=17653, avg=9811.57, stdev=3560.21 00:17:55.732 clat percentiles (usec): 00:17:55.732 | 1.00th=[ 6652], 5.00th=[ 7046], 10.00th=[ 7177], 20.00th=[ 7308], 00:17:55.732 | 30.00th=[ 7439], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 8029], 00:17:55.732 | 70.00th=[ 8586], 80.00th=[14353], 90.00th=[15139], 95.00th=[16909], 00:17:55.732 | 99.00th=[17171], 99.50th=[17433], 99.90th=[17433], 99.95th=[17695], 00:17:55.732 | 99.99th=[17695] 00:17:55.732 write: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec); 0 zone resets 00:17:55.732 slat (usec): min=3, max=2672, avg=72.81, stdev=270.19 00:17:55.732 clat (usec): min=6198, max=17342, avg=9763.45, stdev=3747.57 00:17:55.732 lat (usec): min=6209, max=17346, avg=9836.26, stdev=3769.90 00:17:55.732 clat percentiles (usec): 00:17:55.732 | 1.00th=[ 6456], 5.00th=[ 6652], 10.00th=[ 6783], 20.00th=[ 6915], 00:17:55.732 | 30.00th=[ 7046], 40.00th=[ 7111], 50.00th=[ 7373], 60.00th=[ 7701], 00:17:55.732 | 70.00th=[13435], 80.00th=[14615], 90.00th=[15270], 95.00th=[16581], 00:17:55.732 | 99.00th=[17171], 99.50th=[17171], 99.90th=[17433], 99.95th=[17433], 00:17:55.732 | 99.99th=[17433] 00:17:55.732 bw ( KiB/s): min=24576, max=28672, per=27.95%, avg=26624.00, stdev=2896.31, samples=2 00:17:55.732 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:17:55.732 lat (msec) : 2=0.01%, 4=0.08%, 10=68.07%, 20=31.84% 00:17:55.732 cpu : usr=4.69%, sys=6.59%, ctx=1000, majf=0, minf=7 00:17:55.732 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:17:55.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:55.732 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:55.732 issued rwts: total=6370,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:55.732 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:55.732 job1: (groupid=0, jobs=1): err= 0: pid=107092: Wed Jul 24 14:16:22 2024 00:17:55.732 read: IOPS=4867, BW=19.0MiB/s (19.9MB/s)(19.0MiB/1001msec) 00:17:55.732 slat (usec): min=2, max=4122, avg=102.05, stdev=452.56 00:17:55.732 clat (usec): min=604, max=17620, avg=13204.77, stdev=4043.31 00:17:55.732 lat (usec): min=607, max=17629, avg=13306.82, stdev=4050.29 00:17:55.732 clat percentiles (usec): 00:17:55.732 | 1.00th=[ 3425], 5.00th=[ 7242], 10.00th=[ 7439], 20.00th=[ 7701], 00:17:55.732 | 30.00th=[ 8979], 40.00th=[14091], 50.00th=[14746], 60.00th=[15139], 00:17:55.732 | 70.00th=[16581], 80.00th=[16909], 90.00th=[17171], 95.00th=[17433], 00:17:55.732 | 99.00th=[17433], 99.50th=[17433], 99.90th=[17695], 99.95th=[17695], 00:17:55.732 | 99.99th=[17695] 00:17:55.732 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:17:55.732 slat (usec): min=3, max=4062, avg=92.47, stdev=415.14 00:17:55.732 clat (usec): min=6437, max=17067, avg=12186.73, stdev=4072.25 00:17:55.732 lat (usec): min=6443, max=17074, avg=12279.21, stdev=4084.69 00:17:55.732 clat percentiles (usec): 00:17:55.732 | 1.00th=[ 6587], 5.00th=[ 6849], 10.00th=[ 6915], 20.00th=[ 7111], 00:17:55.732 | 30.00th=[ 7570], 40.00th=[12256], 
50.00th=[14222], 60.00th=[14484], 00:17:55.732 | 70.00th=[15795], 80.00th=[16319], 90.00th=[16581], 95.00th=[16712], 00:17:55.732 | 99.00th=[16909], 99.50th=[16909], 99.90th=[16909], 99.95th=[17171], 00:17:55.732 | 99.99th=[17171] 00:17:55.732 bw ( KiB/s): min=17792, max=17792, per=18.68%, avg=17792.00, stdev= 0.00, samples=1 00:17:55.732 iops : min= 4448, max= 4448, avg=4448.00, stdev= 0.00, samples=1 00:17:55.732 lat (usec) : 750=0.04% 00:17:55.732 lat (msec) : 2=0.14%, 4=0.32%, 10=34.33%, 20=65.17% 00:17:55.732 cpu : usr=3.50%, sys=5.30%, ctx=937, majf=0, minf=17 00:17:55.732 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:55.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:55.732 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:55.732 issued rwts: total=4872,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:55.732 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:55.732 job2: (groupid=0, jobs=1): err= 0: pid=107093: Wed Jul 24 14:16:22 2024 00:17:55.732 read: IOPS=6991, BW=27.3MiB/s (28.6MB/s)(27.4MiB/1002msec) 00:17:55.732 slat (usec): min=3, max=1433, avg=70.36, stdev=255.34 00:17:55.732 clat (usec): min=1276, max=10541, avg=9161.96, stdev=659.26 00:17:55.732 lat (usec): min=2233, max=10545, avg=9232.32, stdev=609.84 00:17:55.732 clat percentiles (usec): 00:17:55.732 | 1.00th=[ 6783], 5.00th=[ 8291], 10.00th=[ 8717], 20.00th=[ 8979], 00:17:55.732 | 30.00th=[ 9110], 40.00th=[ 9241], 50.00th=[ 9241], 60.00th=[ 9372], 00:17:55.732 | 70.00th=[ 9372], 80.00th=[ 9503], 90.00th=[ 9634], 95.00th=[ 9634], 00:17:55.732 | 99.00th=[ 9765], 99.50th=[ 9896], 99.90th=[10552], 99.95th=[10552], 00:17:55.732 | 99.99th=[10552] 00:17:55.732 write: IOPS=7153, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1002msec); 0 zone resets 00:17:55.732 slat (usec): min=3, max=2494, avg=66.07, stdev=236.21 00:17:55.732 clat (usec): min=7258, max=15186, avg=8710.17, stdev=652.29 00:17:55.732 lat (usec): min=8113, max=15192, avg=8776.24, stdev=612.79 00:17:55.732 clat percentiles (usec): 00:17:55.732 | 1.00th=[ 7504], 5.00th=[ 7898], 10.00th=[ 8356], 20.00th=[ 8455], 00:17:55.732 | 30.00th=[ 8586], 40.00th=[ 8586], 50.00th=[ 8717], 60.00th=[ 8717], 00:17:55.732 | 70.00th=[ 8848], 80.00th=[ 8848], 90.00th=[ 8979], 95.00th=[ 9241], 00:17:55.732 | 99.00th=[12387], 99.50th=[13829], 99.90th=[15139], 99.95th=[15139], 00:17:55.732 | 99.99th=[15139] 00:17:55.732 bw ( KiB/s): min=28672, max=28672, per=30.10%, avg=28672.00, stdev= 0.00, samples=2 00:17:55.732 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=2 00:17:55.732 lat (msec) : 2=0.01%, 4=0.25%, 10=98.83%, 20=0.92% 00:17:55.732 cpu : usr=4.20%, sys=7.99%, ctx=892, majf=0, minf=15 00:17:55.732 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:17:55.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:55.732 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:55.732 issued rwts: total=7005,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:55.732 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:55.732 job3: (groupid=0, jobs=1): err= 0: pid=107094: Wed Jul 24 14:16:22 2024 00:17:55.732 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:17:55.732 slat (usec): min=2, max=3727, avg=102.11, stdev=412.93 00:17:55.732 clat (usec): min=8014, max=18501, avg=13333.84, stdev=3754.23 00:17:55.732 lat (usec): min=8020, max=18516, avg=13435.95, stdev=3766.57 00:17:55.732 clat 
percentiles (usec): 00:17:55.732 | 1.00th=[ 8225], 5.00th=[ 8586], 10.00th=[ 8717], 20.00th=[ 9110], 00:17:55.732 | 30.00th=[ 9503], 40.00th=[ 9896], 50.00th=[15795], 60.00th=[16581], 00:17:55.732 | 70.00th=[16712], 80.00th=[16909], 90.00th=[17171], 95.00th=[17433], 00:17:55.732 | 99.00th=[17433], 99.50th=[17695], 99.90th=[17695], 99.95th=[17695], 00:17:55.732 | 99.99th=[18482] 00:17:55.732 write: IOPS=4923, BW=19.2MiB/s (20.2MB/s)(19.3MiB/1003msec); 0 zone resets 00:17:55.732 slat (usec): min=3, max=3705, avg=102.00, stdev=379.59 00:17:55.732 clat (usec): min=1201, max=18640, avg=13212.03, stdev=3659.50 00:17:55.732 lat (usec): min=3633, max=19058, avg=13314.03, stdev=3669.83 00:17:55.732 clat percentiles (usec): 00:17:55.732 | 1.00th=[ 7898], 5.00th=[ 8291], 10.00th=[ 8455], 20.00th=[ 8717], 00:17:55.733 | 30.00th=[ 9372], 40.00th=[12387], 50.00th=[15270], 60.00th=[16057], 00:17:55.733 | 70.00th=[16450], 80.00th=[16581], 90.00th=[16909], 95.00th=[17171], 00:17:55.733 | 99.00th=[17433], 99.50th=[17695], 99.90th=[17957], 99.95th=[18482], 00:17:55.733 | 99.99th=[18744] 00:17:55.733 bw ( KiB/s): min=18000, max=20480, per=20.20%, avg=19240.00, stdev=1753.62, samples=2 00:17:55.733 iops : min= 4500, max= 5120, avg=4810.00, stdev=438.41, samples=2 00:17:55.733 lat (msec) : 2=0.01%, 4=0.07%, 10=39.03%, 20=60.88% 00:17:55.733 cpu : usr=2.99%, sys=6.39%, ctx=826, majf=0, minf=11 00:17:55.733 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:17:55.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:55.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:55.733 issued rwts: total=4608,4938,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:55.733 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:55.733 00:17:55.733 Run status group 0 (all jobs): 00:17:55.733 READ: bw=89.0MiB/s (93.3MB/s), 17.9MiB/s-27.3MiB/s (18.8MB/s-28.6MB/s), io=89.3MiB (93.6MB), run=1001-1003msec 00:17:55.733 WRITE: bw=93.0MiB/s (97.5MB/s), 19.2MiB/s-27.9MiB/s (20.2MB/s-29.3MB/s), io=93.3MiB (97.8MB), run=1001-1003msec 00:17:55.733 00:17:55.733 Disk stats (read/write): 00:17:55.733 nvme0n1: ios=5779/6144, merge=0/0, ticks=12683/13276, in_queue=25959, util=86.17% 00:17:55.733 nvme0n2: ios=3584/3858, merge=0/0, ticks=13543/12820, in_queue=26363, util=86.59% 00:17:55.733 nvme0n3: ios=5903/6144, merge=0/0, ticks=17364/16929, in_queue=34293, util=88.95% 00:17:55.733 nvme0n4: ios=4096/4297, merge=0/0, ticks=13032/13454, in_queue=26486, util=89.60% 00:17:55.733 14:16:22 nvmf_rdma.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:17:55.733 [global] 00:17:55.733 thread=1 00:17:55.733 invalidate=1 00:17:55.733 rw=randwrite 00:17:55.733 time_based=1 00:17:55.733 runtime=1 00:17:55.733 ioengine=libaio 00:17:55.733 direct=1 00:17:55.733 bs=4096 00:17:55.733 iodepth=128 00:17:55.733 norandommap=0 00:17:55.733 numjobs=1 00:17:55.733 00:17:55.733 verify_dump=1 00:17:55.733 verify_backlog=512 00:17:55.733 verify_state_save=0 00:17:55.733 do_verify=1 00:17:55.733 verify=crc32c-intel 00:17:55.733 [job0] 00:17:55.733 filename=/dev/nvme0n1 00:17:55.733 [job1] 00:17:55.733 filename=/dev/nvme0n2 00:17:55.733 [job2] 00:17:55.733 filename=/dev/nvme0n3 00:17:55.733 [job3] 00:17:55.733 filename=/dev/nvme0n4 00:17:55.733 Could not set queue depth (nvme0n1) 00:17:55.733 Could not set queue depth (nvme0n2) 00:17:55.733 Could not set queue depth (nvme0n3) 
00:17:55.733 Could not set queue depth (nvme0n4) 00:17:55.990 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:55.990 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:55.990 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:55.990 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:55.990 fio-3.35 00:17:55.990 Starting 4 threads 00:17:57.362 00:17:57.362 job0: (groupid=0, jobs=1): err= 0: pid=107328: Wed Jul 24 14:16:24 2024 00:17:57.362 read: IOPS=8695, BW=34.0MiB/s (35.6MB/s)(34.0MiB/1001msec) 00:17:57.362 slat (usec): min=2, max=1103, avg=55.07, stdev=191.35 00:17:57.362 clat (usec): min=6057, max=8619, avg=7298.52, stdev=342.96 00:17:57.362 lat (usec): min=6082, max=8636, avg=7353.59, stdev=353.94 00:17:57.362 clat percentiles (usec): 00:17:57.362 | 1.00th=[ 6456], 5.00th=[ 6718], 10.00th=[ 6915], 20.00th=[ 7046], 00:17:57.362 | 30.00th=[ 7177], 40.00th=[ 7242], 50.00th=[ 7308], 60.00th=[ 7373], 00:17:57.362 | 70.00th=[ 7439], 80.00th=[ 7504], 90.00th=[ 7701], 95.00th=[ 7963], 00:17:57.362 | 99.00th=[ 8291], 99.50th=[ 8356], 99.90th=[ 8455], 99.95th=[ 8586], 00:17:57.362 | 99.99th=[ 8586] 00:17:57.362 write: IOPS=9164, BW=35.8MiB/s (37.5MB/s)(35.8MiB/1001msec); 0 zone resets 00:17:57.362 slat (usec): min=3, max=1146, avg=51.82, stdev=176.31 00:17:57.362 clat (usec): min=628, max=8283, avg=6894.05, stdev=520.39 00:17:57.362 lat (usec): min=1290, max=8295, avg=6945.87, stdev=526.30 00:17:57.362 clat percentiles (usec): 00:17:57.362 | 1.00th=[ 5342], 5.00th=[ 6390], 10.00th=[ 6521], 20.00th=[ 6652], 00:17:57.362 | 30.00th=[ 6718], 40.00th=[ 6783], 50.00th=[ 6915], 60.00th=[ 6980], 00:17:57.362 | 70.00th=[ 7046], 80.00th=[ 7242], 90.00th=[ 7373], 95.00th=[ 7504], 00:17:57.362 | 99.00th=[ 7832], 99.50th=[ 7898], 99.90th=[ 8094], 99.95th=[ 8160], 00:17:57.362 | 99.99th=[ 8291] 00:17:57.362 bw ( KiB/s): min=36864, max=36864, per=37.71%, avg=36864.00, stdev= 0.00, samples=1 00:17:57.362 iops : min= 9216, max= 9216, avg=9216.00, stdev= 0.00, samples=1 00:17:57.362 lat (usec) : 750=0.01% 00:17:57.362 lat (msec) : 2=0.17%, 4=0.18%, 10=99.65% 00:17:57.362 cpu : usr=5.20%, sys=10.40%, ctx=1208, majf=0, minf=9 00:17:57.362 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:17:57.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:57.362 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:57.363 issued rwts: total=8704,9174,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:57.363 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:57.363 job1: (groupid=0, jobs=1): err= 0: pid=107329: Wed Jul 24 14:16:24 2024 00:17:57.363 read: IOPS=3149, BW=12.3MiB/s (12.9MB/s)(12.4MiB/1004msec) 00:17:57.363 slat (usec): min=3, max=3246, avg=147.80, stdev=425.60 00:17:57.363 clat (usec): min=1928, max=21755, avg=18887.46, stdev=1989.12 00:17:57.363 lat (usec): min=3814, max=21772, avg=19035.26, stdev=1971.10 00:17:57.363 clat percentiles (usec): 00:17:57.363 | 1.00th=[ 6063], 5.00th=[17171], 10.00th=[17957], 20.00th=[18482], 00:17:57.363 | 30.00th=[18744], 40.00th=[19006], 50.00th=[19268], 60.00th=[19268], 00:17:57.363 | 70.00th=[19530], 80.00th=[19792], 90.00th=[20055], 95.00th=[20317], 00:17:57.363 | 99.00th=[20841], 99.50th=[21103], 99.90th=[21365], 99.95th=[21627], 
00:17:57.363 | 99.99th=[21627] 00:17:57.363 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:17:57.363 slat (usec): min=3, max=3697, avg=143.45, stdev=406.11 00:17:57.363 clat (usec): min=15253, max=20789, avg=18655.98, stdev=783.36 00:17:57.363 lat (usec): min=15275, max=21387, avg=18799.43, stdev=742.69 00:17:57.363 clat percentiles (usec): 00:17:57.363 | 1.00th=[16450], 5.00th=[17433], 10.00th=[17695], 20.00th=[17957], 00:17:57.363 | 30.00th=[18220], 40.00th=[18482], 50.00th=[18744], 60.00th=[19006], 00:17:57.363 | 70.00th=[19006], 80.00th=[19268], 90.00th=[19530], 95.00th=[19792], 00:17:57.363 | 99.00th=[20317], 99.50th=[20579], 99.90th=[20841], 99.95th=[20841], 00:17:57.363 | 99.99th=[20841] 00:17:57.363 bw ( KiB/s): min=13800, max=14568, per=14.51%, avg=14184.00, stdev=543.06, samples=2 00:17:57.363 iops : min= 3450, max= 3642, avg=3546.00, stdev=135.76, samples=2 00:17:57.363 lat (msec) : 2=0.01%, 4=0.24%, 10=0.50%, 20=91.92%, 50=7.32% 00:17:57.363 cpu : usr=2.59%, sys=4.99%, ctx=818, majf=0, minf=21 00:17:57.363 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:17:57.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:57.363 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:57.363 issued rwts: total=3162,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:57.363 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:57.363 job2: (groupid=0, jobs=1): err= 0: pid=107330: Wed Jul 24 14:16:24 2024 00:17:57.363 read: IOPS=7053, BW=27.6MiB/s (28.9MB/s)(27.7MiB/1004msec) 00:17:57.363 slat (usec): min=3, max=1329, avg=69.65, stdev=249.31 00:17:57.363 clat (usec): min=2373, max=11697, avg=9179.67, stdev=521.28 00:17:57.363 lat (usec): min=3406, max=11707, avg=9249.32, stdev=502.87 00:17:57.363 clat percentiles (usec): 00:17:57.363 | 1.00th=[ 7832], 5.00th=[ 8291], 10.00th=[ 8979], 20.00th=[ 9110], 00:17:57.363 | 30.00th=[ 9110], 40.00th=[ 9241], 50.00th=[ 9241], 60.00th=[ 9372], 00:17:57.363 | 70.00th=[ 9372], 80.00th=[ 9372], 90.00th=[ 9503], 95.00th=[ 9634], 00:17:57.363 | 99.00th=[10028], 99.50th=[10028], 99.90th=[11600], 99.95th=[11731], 00:17:57.363 | 99.99th=[11731] 00:17:57.363 write: IOPS=7139, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1004msec); 0 zone resets 00:17:57.363 slat (usec): min=3, max=1838, avg=65.54, stdev=232.60 00:17:57.363 clat (usec): min=6346, max=9914, avg=8662.92, stdev=370.09 00:17:57.363 lat (usec): min=6353, max=10614, avg=8728.46, stdev=350.07 00:17:57.363 clat percentiles (usec): 00:17:57.363 | 1.00th=[ 7504], 5.00th=[ 7832], 10.00th=[ 8291], 20.00th=[ 8455], 00:17:57.363 | 30.00th=[ 8586], 40.00th=[ 8586], 50.00th=[ 8717], 60.00th=[ 8717], 00:17:57.363 | 70.00th=[ 8848], 80.00th=[ 8979], 90.00th=[ 8979], 95.00th=[ 9110], 00:17:57.363 | 99.00th=[ 9634], 99.50th=[ 9634], 99.90th=[ 9896], 99.95th=[ 9896], 00:17:57.363 | 99.99th=[ 9896] 00:17:57.363 bw ( KiB/s): min=28672, max=28672, per=29.33%, avg=28672.00, stdev= 0.00, samples=2 00:17:57.363 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=2 00:17:57.363 lat (msec) : 4=0.08%, 10=99.47%, 20=0.46% 00:17:57.363 cpu : usr=4.19%, sys=8.87%, ctx=891, majf=0, minf=11 00:17:57.363 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:17:57.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:57.363 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:57.363 issued rwts: total=7082,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 
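[Editor's sanity check on the fio arithmetic in this report, not part of the captured output: with bs=4096, bandwidth is simply IOPS × block size, so job2's 7139 write IOPS × 4096 B ≈ 29.2 MB/s ≈ 27.9 MiB/s, matching the reported BW=27.9MiB/s (29.3MB/s) within rounding. Likewise the per= field is the job's share of the group aggregate for the same direction: 28672 KiB/s out of the 95.5 MiB/s (97792 KiB/s) WRITE total in the run-status section below gives 29.32%, matching the reported per=29.33%.]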
00:17:57.363 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:57.363 job3: (groupid=0, jobs=1): err= 0: pid=107331: Wed Jul 24 14:16:24 2024 00:17:57.363 read: IOPS=4358, BW=17.0MiB/s (17.9MB/s)(17.1MiB/1004msec) 00:17:57.363 slat (usec): min=3, max=2298, avg=111.50, stdev=330.52 00:17:57.363 clat (usec): min=2737, max=19415, avg=14387.27, stdev=1098.61 00:17:57.363 lat (usec): min=4753, max=19418, avg=14498.77, stdev=1092.36 00:17:57.363 clat percentiles (usec): 00:17:57.363 | 1.00th=[10028], 5.00th=[13304], 10.00th=[13566], 20.00th=[13960], 00:17:57.363 | 30.00th=[14222], 40.00th=[14353], 50.00th=[14484], 60.00th=[14615], 00:17:57.363 | 70.00th=[14746], 80.00th=[14877], 90.00th=[15401], 95.00th=[15795], 00:17:57.363 | 99.00th=[16319], 99.50th=[16450], 99.90th=[18482], 99.95th=[18482], 00:17:57.363 | 99.99th=[19530] 00:17:57.363 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:17:57.363 slat (usec): min=3, max=2748, avg=105.75, stdev=312.02 00:17:57.363 clat (usec): min=11298, max=16477, avg=13898.56, stdev=604.30 00:17:57.363 lat (usec): min=11315, max=16492, avg=14004.30, stdev=597.93 00:17:57.363 clat percentiles (usec): 00:17:57.363 | 1.00th=[12387], 5.00th=[12911], 10.00th=[13042], 20.00th=[13435], 00:17:57.363 | 30.00th=[13698], 40.00th=[13829], 50.00th=[13960], 60.00th=[14091], 00:17:57.363 | 70.00th=[14222], 80.00th=[14353], 90.00th=[14615], 95.00th=[14877], 00:17:57.363 | 99.00th=[15270], 99.50th=[15401], 99.90th=[15926], 99.95th=[15926], 00:17:57.363 | 99.99th=[16450] 00:17:57.363 bw ( KiB/s): min=17984, max=18880, per=18.86%, avg=18432.00, stdev=633.57, samples=2 00:17:57.363 iops : min= 4496, max= 4720, avg=4608.00, stdev=158.39, samples=2 00:17:57.363 lat (msec) : 4=0.01%, 10=0.48%, 20=99.51% 00:17:57.363 cpu : usr=2.99%, sys=6.48%, ctx=916, majf=0, minf=11 00:17:57.363 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:17:57.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:57.363 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:57.363 issued rwts: total=4376,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:57.363 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:57.363 00:17:57.363 Run status group 0 (all jobs): 00:17:57.363 READ: bw=90.7MiB/s (95.2MB/s), 12.3MiB/s-34.0MiB/s (12.9MB/s-35.6MB/s), io=91.1MiB (95.5MB), run=1001-1004msec 00:17:57.363 WRITE: bw=95.5MiB/s (100MB/s), 13.9MiB/s-35.8MiB/s (14.6MB/s-37.5MB/s), io=95.8MiB (100MB), run=1001-1004msec 00:17:57.363 00:17:57.363 Disk stats (read/write): 00:17:57.363 nvme0n1: ios=7495/7680, merge=0/0, ticks=12890/12422, in_queue=25312, util=85.87% 00:17:57.363 nvme0n2: ios=2603/3072, merge=0/0, ticks=12274/14192, in_queue=26466, util=86.54% 00:17:57.363 nvme0n3: ios=5899/6144, merge=0/0, ticks=26222/25685, in_queue=51907, util=88.81% 00:17:57.363 nvme0n4: ios=3584/4013, merge=0/0, ticks=16647/17867, in_queue=34514, util=89.56% 00:17:57.363 14:16:24 nvmf_rdma.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:17:57.363 14:16:24 nvmf_rdma.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=107467 00:17:57.363 14:16:24 nvmf_rdma.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:17:57.363 14:16:24 nvmf_rdma.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:17:57.363 [global] 00:17:57.363 thread=1 00:17:57.363 invalidate=1 00:17:57.363 rw=read 00:17:57.363 time_based=1 00:17:57.363 runtime=10 
00:17:57.363 ioengine=libaio 00:17:57.363 direct=1 00:17:57.363 bs=4096 00:17:57.363 iodepth=1 00:17:57.363 norandommap=1 00:17:57.363 numjobs=1 00:17:57.363 00:17:57.363 [job0] 00:17:57.363 filename=/dev/nvme0n1 00:17:57.363 [job1] 00:17:57.363 filename=/dev/nvme0n2 00:17:57.363 [job2] 00:17:57.363 filename=/dev/nvme0n3 00:17:57.363 [job3] 00:17:57.363 filename=/dev/nvme0n4 00:17:57.363 Could not set queue depth (nvme0n1) 00:17:57.363 Could not set queue depth (nvme0n2) 00:17:57.363 Could not set queue depth (nvme0n3) 00:17:57.363 Could not set queue depth (nvme0n4) 00:17:57.363 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:57.363 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:57.363 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:57.363 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:57.363 fio-3.35 00:17:57.363 Starting 4 threads 00:18:00.679 14:16:27 nvmf_rdma.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:00.679 14:16:27 nvmf_rdma.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:00.679 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=89448448, buflen=4096 00:18:00.679 fio: pid=107679, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:00.679 14:16:27 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:00.679 14:16:27 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:00.679 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=90439680, buflen=4096 00:18:00.679 fio: pid=107678, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:00.936 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=43360256, buflen=4096 00:18:00.936 fio: pid=107626, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:00.936 14:16:28 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:00.936 14:16:28 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:01.194 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=43122688, buflen=4096 00:18:01.194 fio: pid=107644, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:01.194 14:16:28 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:01.194 14:16:28 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:01.194 00:18:01.194 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=107626: Wed Jul 24 14:16:28 2024 00:18:01.195 read: IOPS=7897, BW=30.8MiB/s (32.3MB/s)(105MiB/3415msec) 00:18:01.195 slat (usec): min=4, max=19033, avg= 8.04, stdev=157.61 00:18:01.195 clat (usec): min=58, max=28831, avg=116.91, stdev=230.67 00:18:01.195 lat (usec): min=63, max=28838, avg=124.95, 
stdev=279.24 00:18:01.195 clat percentiles (usec): 00:18:01.195 | 1.00th=[ 68], 5.00th=[ 71], 10.00th=[ 73], 20.00th=[ 77], 00:18:01.195 | 30.00th=[ 86], 40.00th=[ 102], 50.00th=[ 122], 60.00th=[ 128], 00:18:01.195 | 70.00th=[ 133], 80.00th=[ 143], 90.00th=[ 159], 95.00th=[ 174], 00:18:01.195 | 99.00th=[ 198], 99.50th=[ 202], 99.90th=[ 212], 99.95th=[ 217], 00:18:01.195 | 99.99th=[ 506] 00:18:01.195 bw ( KiB/s): min=21952, max=36744, per=30.25%, avg=32057.33, stdev=5721.00, samples=6 00:18:01.195 iops : min= 5488, max= 9186, avg=8014.33, stdev=1430.25, samples=6 00:18:01.195 lat (usec) : 100=38.60%, 250=61.38%, 500=0.01%, 750=0.01% 00:18:01.195 lat (msec) : 50=0.01% 00:18:01.195 cpu : usr=2.23%, sys=6.71%, ctx=26979, majf=0, minf=1 00:18:01.195 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:01.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.195 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.195 issued rwts: total=26971,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:01.195 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:01.195 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=107644: Wed Jul 24 14:16:28 2024 00:18:01.195 read: IOPS=7289, BW=28.5MiB/s (29.9MB/s)(105MiB/3692msec) 00:18:01.195 slat (usec): min=3, max=16735, avg= 7.89, stdev=166.70 00:18:01.195 clat (usec): min=53, max=26010, avg=127.80, stdev=203.44 00:18:01.195 lat (usec): min=59, max=26029, avg=135.69, stdev=262.92 00:18:01.195 clat percentiles (usec): 00:18:01.195 | 1.00th=[ 60], 5.00th=[ 64], 10.00th=[ 69], 20.00th=[ 81], 00:18:01.195 | 30.00th=[ 102], 40.00th=[ 123], 50.00th=[ 129], 60.00th=[ 133], 00:18:01.195 | 70.00th=[ 143], 80.00th=[ 157], 90.00th=[ 182], 95.00th=[ 198], 00:18:01.195 | 99.00th=[ 225], 99.50th=[ 260], 99.90th=[ 326], 99.95th=[ 371], 00:18:01.195 | 99.99th=[ 9765] 00:18:01.195 bw ( KiB/s): min=22344, max=39167, per=26.77%, avg=28370.14, stdev=5654.59, samples=7 00:18:01.195 iops : min= 5586, max= 9791, avg=7092.43, stdev=1413.41, samples=7 00:18:01.195 lat (usec) : 100=28.54%, 250=70.89%, 500=0.55% 00:18:01.195 lat (msec) : 10=0.01%, 20=0.01%, 50=0.01% 00:18:01.195 cpu : usr=1.68%, sys=6.20%, ctx=26926, majf=0, minf=1 00:18:01.195 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:01.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.195 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.195 issued rwts: total=26913,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:01.195 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:01.195 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=107678: Wed Jul 24 14:16:28 2024 00:18:01.195 read: IOPS=6952, BW=27.2MiB/s (28.5MB/s)(86.2MiB/3176msec) 00:18:01.195 slat (usec): min=4, max=15774, avg= 9.01, stdev=132.96 00:18:01.195 clat (usec): min=76, max=21035, avg=132.02, stdev=145.03 00:18:01.195 lat (usec): min=82, max=21048, avg=141.03, stdev=196.52 00:18:01.195 clat percentiles (usec): 00:18:01.195 | 1.00th=[ 85], 5.00th=[ 90], 10.00th=[ 92], 20.00th=[ 97], 00:18:01.195 | 30.00th=[ 105], 40.00th=[ 118], 50.00th=[ 127], 60.00th=[ 133], 00:18:01.195 | 70.00th=[ 143], 80.00th=[ 163], 90.00th=[ 184], 95.00th=[ 200], 00:18:01.195 | 99.00th=[ 223], 99.50th=[ 241], 99.90th=[ 273], 99.95th=[ 277], 00:18:01.195 | 99.99th=[ 306] 00:18:01.195 bw ( KiB/s): min=22736, 
max=33448, per=26.49%, avg=28073.33, stdev=4942.23, samples=6 00:18:01.195 iops : min= 5684, max= 8362, avg=7018.33, stdev=1235.56, samples=6 00:18:01.195 lat (usec) : 100=24.84%, 250=74.81%, 500=0.34% 00:18:01.195 lat (msec) : 50=0.01% 00:18:01.195 cpu : usr=2.49%, sys=7.24%, ctx=22084, majf=0, minf=1 00:18:01.195 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:01.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.195 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.195 issued rwts: total=22081,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:01.195 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:01.195 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=107679: Wed Jul 24 14:16:28 2024 00:18:01.195 read: IOPS=7494, BW=29.3MiB/s (30.7MB/s)(85.3MiB/2914msec) 00:18:01.195 slat (nsec): min=3965, max=44193, avg=6875.88, stdev=3140.47 00:18:01.195 clat (usec): min=76, max=303, avg=123.54, stdev=35.56 00:18:01.195 lat (usec): min=81, max=323, avg=130.42, stdev=35.48 00:18:01.195 clat percentiles (usec): 00:18:01.195 | 1.00th=[ 82], 5.00th=[ 85], 10.00th=[ 87], 20.00th=[ 90], 00:18:01.195 | 30.00th=[ 95], 40.00th=[ 104], 50.00th=[ 124], 60.00th=[ 131], 00:18:01.195 | 70.00th=[ 137], 80.00th=[ 151], 90.00th=[ 178], 95.00th=[ 194], 00:18:01.195 | 99.00th=[ 223], 99.50th=[ 243], 99.90th=[ 269], 99.95th=[ 277], 00:18:01.195 | 99.99th=[ 289] 00:18:01.195 bw ( KiB/s): min=24432, max=36592, per=27.62%, avg=29265.60, stdev=5190.41, samples=5 00:18:01.195 iops : min= 6108, max= 9148, avg=7316.40, stdev=1297.60, samples=5 00:18:01.195 lat (usec) : 100=36.76%, 250=62.81%, 500=0.44% 00:18:01.195 cpu : usr=2.37%, sys=7.69%, ctx=21839, majf=0, minf=1 00:18:01.195 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:01.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.195 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.195 issued rwts: total=21839,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:01.195 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:01.195 00:18:01.195 Run status group 0 (all jobs): 00:18:01.195 READ: bw=103MiB/s (109MB/s), 27.2MiB/s-30.8MiB/s (28.5MB/s-32.3MB/s), io=382MiB (401MB), run=2914-3692msec 00:18:01.195 00:18:01.195 Disk stats (read/write): 00:18:01.195 nvme0n1: ios=26466/0, merge=0/0, ticks=3122/0, in_queue=3122, util=94.51% 00:18:01.195 nvme0n2: ios=25764/0, merge=0/0, ticks=3377/0, in_queue=3377, util=94.88% 00:18:01.195 nvme0n3: ios=21760/0, merge=0/0, ticks=2910/0, in_queue=2910, util=95.91% 00:18:01.195 nvme0n4: ios=21544/0, merge=0/0, ticks=2714/0, in_queue=2714, util=96.71% 00:18:01.453 14:16:28 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:01.453 14:16:28 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:01.711 14:16:28 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:01.711 14:16:28 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:18:01.969 14:16:29 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:18:01.969 14:16:29 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:18:02.227 14:16:29 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:02.227 14:16:29 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:02.485 14:16:29 nvmf_rdma.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:18:02.485 14:16:29 nvmf_rdma.nvmf_fio_target -- target/fio.sh@70 -- # wait 107467 00:18:02.485 14:16:29 nvmf_rdma.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:18:02.485 14:16:29 nvmf_rdma.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:03.417 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:03.417 14:16:30 nvmf_rdma.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:03.417 14:16:30 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:18:03.417 14:16:30 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:03.417 14:16:30 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:03.417 14:16:30 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:03.417 14:16:30 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:03.674 14:16:30 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:18:03.674 14:16:30 nvmf_rdma.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:03.674 14:16:30 nvmf_rdma.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:03.674 nvmf hotplug test: fio failed as expected 00:18:03.674 14:16:30 nvmf_rdma.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:03.675 14:16:31 nvmf_rdma.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:03.675 14:16:31 nvmf_rdma.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:03.933 14:16:31 nvmf_rdma.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:03.933 14:16:31 nvmf_rdma.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:03.933 14:16:31 nvmf_rdma.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:18:03.933 14:16:31 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:03.933 14:16:31 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:18:03.933 14:16:31 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:18:03.933 14:16:31 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:18:03.933 14:16:31 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:18:03.933 14:16:31 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:03.933 14:16:31 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:18:03.933 rmmod nvme_rdma 00:18:03.933 rmmod nvme_fabrics 00:18:03.933 14:16:31 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:03.933 14:16:31 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:18:03.933 
14:16:31 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:18:03.933 14:16:31 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 105429 ']' 00:18:03.933 14:16:31 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 105429 00:18:03.933 14:16:31 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 105429 ']' 00:18:03.933 14:16:31 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 105429 00:18:03.933 14:16:31 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:18:03.933 14:16:31 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:03.933 14:16:31 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 105429 00:18:03.933 14:16:31 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:03.933 14:16:31 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:03.933 14:16:31 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 105429' 00:18:03.933 killing process with pid 105429 00:18:03.933 14:16:31 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 105429 00:18:03.933 14:16:31 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 105429 00:18:04.191 14:16:31 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:04.191 14:16:31 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:18:04.191 00:18:04.191 real 0m23.325s 00:18:04.191 user 1m31.921s 00:18:04.191 sys 0m5.544s 00:18:04.191 14:16:31 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:04.191 14:16:31 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.191 ************************************ 00:18:04.191 END TEST nvmf_fio_target 00:18:04.191 ************************************ 00:18:04.191 14:16:31 nvmf_rdma -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:18:04.191 14:16:31 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:04.191 14:16:31 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:04.191 14:16:31 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:18:04.191 ************************************ 00:18:04.191 START TEST nvmf_bdevio 00:18:04.191 ************************************ 00:18:04.191 14:16:31 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:18:04.191 * Looking for test storage... 
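[Editor's note: the nvmf_fio_target run that just finished exercises the hotplug pattern from target/fio.sh@58-70: launch a long fio read job against the exported namespaces, delete the backing bdevs out from under it with rpc.py, and require fio to fail (the err=121 Remote I/O errors above) rather than succeed. A minimal standalone sketch of that pattern follows; it is not the test script itself, and the rpc.py path, bdev name, and device path are illustrative placeholders.

    #!/usr/bin/env bash
    # Sketch of the expected-failure hotplug check seen above (paths are placeholders).
    set -e
    rpc=./scripts/rpc.py   # assumed location of SPDK's rpc.py

    # Long-running read job; it must die with an I/O error once the bdev is gone.
    fio --name=hotplug --filename=/dev/nvme0n1 --rw=read --bs=4096 \
        --ioengine=libaio --direct=1 --iodepth=1 --time_based --runtime=10 &
    fio_pid=$!

    sleep 3                          # let I/O get in flight (mirrors sleep 3 at fio.sh@61)
    $rpc bdev_malloc_delete Malloc0  # yank a backing bdev while fio is reading

    if wait "$fio_pid"; then
        echo "fio unexpectedly succeeded" >&2; exit 1
    fi
    echo "nvmf hotplug test: fio failed as expected"

The real test deletes every malloc, raid, and concat bdev in turn (the bdev_raid_delete and bdev_malloc_delete RPCs traced above) and then disconnects the controller.]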
00:18:04.191 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:04.191 14:16:31 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:04.191 14:16:31 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:18:04.191 14:16:31 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:04.191 14:16:31 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:04.191 14:16:31 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:04.191 14:16:31 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:04.191 14:16:31 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:04.191 14:16:31 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:04.191 14:16:31 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:04.191 14:16:31 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:04.191 14:16:31 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:04.191 14:16:31 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:04.191 14:16:31 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:18:04.191 14:16:31 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:18:04.191 14:16:31 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:04.191 14:16:31 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:04.191 14:16:31 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:04.191 14:16:31 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:04.191 14:16:31 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:04.191 14:16:31 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:04.191 14:16:31 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:04.191 14:16:31 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:04.191 14:16:31 nvmf_rdma.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.191 14:16:31 nvmf_rdma.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.191 14:16:31 nvmf_rdma.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.191 14:16:31 nvmf_rdma.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:18:04.192 14:16:31 nvmf_rdma.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.192 14:16:31 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:18:04.192 14:16:31 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:04.192 14:16:31 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:04.192 14:16:31 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:04.192 14:16:31 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:04.192 14:16:31 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:04.192 14:16:31 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:04.192 14:16:31 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:04.192 14:16:31 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:04.192 14:16:31 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:04.192 14:16:31 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:04.192 14:16:31 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:18:04.192 14:16:31 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:18:04.192 14:16:31 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:04.192 14:16:31 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:04.192 14:16:31 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:04.192 14:16:31 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:04.192 14:16:31 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.192 14:16:31 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@22 
-- # eval '_remove_spdk_ns 14> /dev/null' 00:18:04.192 14:16:31 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:04.192 14:16:31 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:04.192 14:16:31 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:04.192 14:16:31 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:18:04.192 14:16:31 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:06.732 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:06.732 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:18:06.732 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:06.732 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:06.732 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:06.732 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:06.732 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:06.732 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:18:06.732 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:06.732 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:18:06.732 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:18:06.732 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:18:06.732 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:18:06.732 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:18:06.732 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:18:06.732 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:06.732 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:06.732 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:06.732 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:06.732 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:06.732 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:06.732 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:06.732 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:06.732 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:06.732 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 
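[Editor's note: the nvmf/common.sh trace here assembles arrays of vendor:device IDs (e810, x722, mlx) and then walks the PCI bus for matches, printing each hit and collecting its net interfaces from sysfs, as the "Found 0000:81:00.x" lines below show. A standalone sketch of that discovery, assuming only the standard sysfs layout; 0x15b3:0x1015 are the Mellanox ConnectX-4 Lx IDs this rig reports, and the mlx_0_* names are whatever the interfaces happen to be called:

    #!/usr/bin/env bash
    # Minimal re-creation of the NIC discovery traced above: find Mellanox
    # ConnectX-4 Lx functions (0x15b3:0x1015) and list their net interfaces.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor")   # e.g. 0x15b3
        device=$(cat "$pci/device")   # e.g. 0x1015
        [[ $vendor == 0x15b3 && $device == 0x1015 ]] || continue
        echo "Found ${pci##*/} ($vendor - $device)"
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "  net device: ${net##*/}"   # e.g. mlx_0_0
        done
    done
]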
00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:18:06.733 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:18:06.733 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:18:06.733 Found net devices under 0000:81:00.0: mlx_0_0 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:18:06.733 Found net devices under 0000:81:00.1: mlx_0_1 00:18:06.733 
14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@420 -- # rdma_device_init 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@58 -- # uname 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@62 -- # modprobe ib_cm 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@63 -- # modprobe ib_core 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@64 -- # modprobe ib_umad 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe iw_cm 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@502 -- # allocate_nic_ips 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # get_rdma_if_list 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in 
$(get_rdma_if_list) 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:18:06.733 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:06.733 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:18:06.733 altname enp129s0f0np0 00:18:06.733 inet 192.168.100.8/24 scope global mlx_0_0 00:18:06.733 valid_lft forever preferred_lft forever 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:18:06.733 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:06.733 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:18:06.733 altname enp129s0f1np1 00:18:06.733 inet 192.168.100.9/24 scope global mlx_0_1 00:18:06.733 valid_lft forever preferred_lft forever 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # get_rdma_if_list 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:06.733 14:16:33 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:06.733 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:06.733 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:06.733 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:06.733 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:06.733 14:16:34 
nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:06.733 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:18:06.733 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:06.733 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:06.733 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:06.733 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:06.733 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:06.733 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:18:06.734 192.168.100.9' 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:18:06.734 192.168.100.9' 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # head -n 1 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:18:06.734 192.168.100.9' 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # tail -n +2 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # head -n 1 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@720 -- # 
xtrace_disable 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=110446 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 110446 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 110446 ']' 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:06.734 14:16:34 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:06.734 [2024-07-24 14:16:34.081449] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:18:06.734 [2024-07-24 14:16:34.081516] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:06.992 EAL: No free 2048 kB hugepages reported on node 1 00:18:06.992 [2024-07-24 14:16:34.152506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:06.992 [2024-07-24 14:16:34.244059] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:06.992 [2024-07-24 14:16:34.244120] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:06.992 [2024-07-24 14:16:34.244147] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:06.992 [2024-07-24 14:16:34.244160] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:06.992 [2024-07-24 14:16:34.244173] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
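The interface/IP discovery traced above reduces to one pipeline per RDMA port: ip -o -4 addr show piped through awk and cut. A standalone sketch of that helper (the function name matches the get_ip_address seen in the xtrace at nvmf/common.sh@112-113; the wrapper is otherwise illustrative):

    # Print the first IPv4 address assigned to an interface, as used above to
    # discover 192.168.100.8 (mlx_0_0) and 192.168.100.9 (mlx_0_1).
    get_ip_address() {
        local interface=$1
        # -o prints one line per address; field 4 is "addr/prefix"; cut drops the prefix.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig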
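nvmfappstart -m 0x78 then launches nvmf_tgt in the background and waitforlisten 110446 blocks until the app answers on /var/tmp/spdk.sock, which is why the nvmfpid assignment precedes the reactor notices below. A sketch of that wait loop; the max_retries=100 default is visible in the xtrace above, but the probe method and sleep interval here are illustrative, and rpc.py is assumed to be on PATH:

    # Poll until an SPDK app serves its RPC socket; bail out if the process dies.
    wait_for_rpc() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1        # target exited early
            # rpc_get_methods is a cheap probe; it succeeds once the
            # UNIX-domain socket is up and the RPC server accepts calls.
            rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }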
00:18:06.992 [2024-07-24 14:16:34.244266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:06.992 [2024-07-24 14:16:34.244321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:06.992 [2024-07-24 14:16:34.244373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:06.992 [2024-07-24 14:16:34.244375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:06.992 14:16:34 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:06.992 14:16:34 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:18:06.992 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:06.992 14:16:34 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:06.992 14:16:34 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:07.249 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:07.250 14:16:34 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:07.250 14:16:34 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.250 14:16:34 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:07.250 [2024-07-24 14:16:34.415429] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xbdb2c0/0xbdf7b0) succeed. 00:18:07.250 [2024-07-24 14:16:34.426419] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xbdc8b0/0xc20e40) succeed. 00:18:07.250 14:16:34 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.250 14:16:34 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:07.250 14:16:34 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.250 14:16:34 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:07.250 Malloc0 00:18:07.250 14:16:34 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.250 14:16:34 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:07.250 14:16:34 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.250 14:16:34 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:07.250 14:16:34 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.250 14:16:34 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:07.250 14:16:34 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.250 14:16:34 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:07.250 14:16:34 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.250 14:16:34 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:07.250 14:16:34 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.250 14:16:34 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:07.250 [2024-07-24 14:16:34.619110] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:07.508 14:16:34 nvmf_rdma.nvmf_bdevio -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.508 14:16:34 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:18:07.508 14:16:34 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:07.508 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:18:07.508 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:18:07.508 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:07.508 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:07.508 { 00:18:07.508 "params": { 00:18:07.508 "name": "Nvme$subsystem", 00:18:07.508 "trtype": "$TEST_TRANSPORT", 00:18:07.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:07.508 "adrfam": "ipv4", 00:18:07.508 "trsvcid": "$NVMF_PORT", 00:18:07.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:07.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:07.508 "hdgst": ${hdgst:-false}, 00:18:07.508 "ddgst": ${ddgst:-false} 00:18:07.508 }, 00:18:07.508 "method": "bdev_nvme_attach_controller" 00:18:07.508 } 00:18:07.508 EOF 00:18:07.508 )") 00:18:07.508 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:18:07.508 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:18:07.508 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:18:07.508 14:16:34 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:07.508 "params": { 00:18:07.508 "name": "Nvme1", 00:18:07.508 "trtype": "rdma", 00:18:07.508 "traddr": "192.168.100.8", 00:18:07.508 "adrfam": "ipv4", 00:18:07.508 "trsvcid": "4420", 00:18:07.508 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:07.508 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:07.508 "hdgst": false, 00:18:07.508 "ddgst": false 00:18:07.508 }, 00:18:07.508 "method": "bdev_nvme_attach_controller" 00:18:07.508 }' 00:18:07.508 [2024-07-24 14:16:34.663702] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
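The gen_nvmf_target_json expansion above is what bdevio receives over the --json /dev/fd/62 process substitution: the printf fragment (the params object plus bdev_nvme_attach_controller) is spliced by jq into a bdev-subsystem config. Reconstructed from that fragment, the config handed to bdevio is approximately the following; the outer "subsystems" wrapper is inferred from the heredoc rather than shown in the xtrace, so treat this as a sketch:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "rdma",
                "traddr": "192.168.100.8",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }

A kernel-initiator equivalent of the same attach would be roughly 'nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1', matching the NVME_CONNECT='nvme connect -i 15' chosen earlier for these ConnectX-4 Lx (0x1015) ports, where -i appears to cap the I/O queue count.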
00:18:07.508 [2024-07-24 14:16:34.663786] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110481 ] 00:18:07.508 EAL: No free 2048 kB hugepages reported on node 1 00:18:07.508 [2024-07-24 14:16:34.740471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:07.508 [2024-07-24 14:16:34.832204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:07.508 [2024-07-24 14:16:34.832260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:07.508 [2024-07-24 14:16:34.832263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.766 I/O targets: 00:18:07.766 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:07.766 00:18:07.766 00:18:07.766 CUnit - A unit testing framework for C - Version 2.1-3 00:18:07.766 http://cunit.sourceforge.net/ 00:18:07.766 00:18:07.766 00:18:07.766 Suite: bdevio tests on: Nvme1n1 00:18:07.766 Test: blockdev write read block ...passed 00:18:07.766 Test: blockdev write zeroes read block ...passed 00:18:07.766 Test: blockdev write zeroes read no split ...passed 00:18:07.766 Test: blockdev write zeroes read split ...passed 00:18:07.766 Test: blockdev write zeroes read split partial ...passed 00:18:07.766 Test: blockdev reset ...[2024-07-24 14:16:35.078636] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:07.766 [2024-07-24 14:16:35.104323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:07.766 [2024-07-24 14:16:35.128668] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:07.766 passed 00:18:07.766 Test: blockdev write read 8 blocks ...passed 00:18:07.766 Test: blockdev write read size > 128k ...passed 00:18:07.766 Test: blockdev write read invalid size ...passed 00:18:07.766 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:07.766 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:07.766 Test: blockdev write read max offset ...passed 00:18:07.766 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:07.766 Test: blockdev writev readv 8 blocks ...passed 00:18:07.766 Test: blockdev writev readv 30 x 1block ...passed 00:18:07.766 Test: blockdev writev readv block ...passed 00:18:07.766 Test: blockdev writev readv size > 128k ...passed 00:18:07.766 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:07.766 Test: blockdev comparev and writev ...[2024-07-24 14:16:35.132814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:07.766 [2024-07-24 14:16:35.132856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.766 [2024-07-24 14:16:35.132876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:07.766 [2024-07-24 14:16:35.132892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:07.766 [2024-07-24 14:16:35.133091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:07.766 [2024-07-24 14:16:35.133113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:07.766 [2024-07-24 14:16:35.133129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:07.766 [2024-07-24 14:16:35.133143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:07.766 [2024-07-24 14:16:35.133341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:07.766 [2024-07-24 14:16:35.133363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:07.766 [2024-07-24 14:16:35.133380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:07.766 [2024-07-24 14:16:35.133395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:07.766 [2024-07-24 14:16:35.133591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:07.766 [2024-07-24 14:16:35.133612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:07.766 [2024-07-24 14:16:35.133628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:07.766 [2024-07-24 14:16:35.133642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:07.766 passed 00:18:07.766 Test: blockdev nvme passthru rw ...passed 00:18:07.766 Test: blockdev nvme passthru vendor specific ...[2024-07-24 14:16:35.134049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:07.766 [2024-07-24 14:16:35.134073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:07.766 [2024-07-24 14:16:35.134135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:07.766 [2024-07-24 14:16:35.134156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:07.766 [2024-07-24 14:16:35.134216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:07.766 [2024-07-24 14:16:35.134237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:07.766 [2024-07-24 14:16:35.134295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:07.766 [2024-07-24 14:16:35.134315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:07.766 passed 00:18:08.024 Test: blockdev nvme admin passthru ...passed 00:18:08.024 Test: blockdev copy ...passed 00:18:08.024 00:18:08.025 Run Summary: Type Total Ran Passed Failed Inactive 00:18:08.025 suites 1 1 n/a 0 0 00:18:08.025 tests 23 23 23 0 0 00:18:08.025 asserts 152 152 152 0 n/a 00:18:08.025 00:18:08.025 Elapsed time = 0.199 seconds 00:18:08.025 14:16:35 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:08.025 14:16:35 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.025 14:16:35 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:08.025 14:16:35 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.025 14:16:35 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:08.025 14:16:35 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:18:08.025 14:16:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:08.025 14:16:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:18:08.025 14:16:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:18:08.025 14:16:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:18:08.025 14:16:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:18:08.025 14:16:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:08.025 14:16:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:18:08.025 rmmod nvme_rdma 00:18:08.283 rmmod nvme_fabrics 00:18:08.283 14:16:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:08.283 14:16:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:18:08.283 14:16:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:18:08.283 14:16:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 110446 ']' 00:18:08.283 14:16:35 
nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 110446 00:18:08.283 14:16:35 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 110446 ']' 00:18:08.283 14:16:35 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 110446 00:18:08.283 14:16:35 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:18:08.283 14:16:35 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:08.283 14:16:35 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 110446 00:18:08.283 14:16:35 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:18:08.283 14:16:35 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:18:08.283 14:16:35 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 110446' 00:18:08.283 killing process with pid 110446 00:18:08.283 14:16:35 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 110446 00:18:08.283 14:16:35 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 110446 00:18:08.542 14:16:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:08.542 14:16:35 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:18:08.542 00:18:08.542 real 0m4.283s 00:18:08.542 user 0m7.965s 00:18:08.542 sys 0m2.350s 00:18:08.542 14:16:35 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:08.542 14:16:35 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:08.542 ************************************ 00:18:08.542 END TEST nvmf_bdevio 00:18:08.542 ************************************ 00:18:08.542 14:16:35 nvmf_rdma -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:18:08.542 14:16:35 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:08.542 14:16:35 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:08.542 14:16:35 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:18:08.542 ************************************ 00:18:08.542 START TEST nvmf_auth_target 00:18:08.542 ************************************ 00:18:08.542 14:16:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:18:08.542 * Looking for test storage... 
00:18:08.542 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:08.542 14:16:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:08.542 14:16:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:08.542 14:16:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:08.542 14:16:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:08.542 14:16:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:08.542 14:16:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:08.542 14:16:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:08.542 14:16:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:08.542 14:16:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:08.542 14:16:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:08.542 14:16:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:08.542 14:16:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:08.542 14:16:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:18:08.542 14:16:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:18:08.542 14:16:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:08.542 14:16:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:08.542 14:16:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:08.542 14:16:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:08.542 14:16:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:08.542 14:16:35 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:08.542 14:16:35 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:08.542 14:16:35 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:08.542 14:16:35 nvmf_rdma.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.542 14:16:35 nvmf_rdma.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.542 14:16:35 nvmf_rdma.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.542 14:16:35 nvmf_rdma.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:08.543 14:16:35 nvmf_rdma.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.543 14:16:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:18:08.543 14:16:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:08.543 14:16:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:08.543 14:16:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:08.543 14:16:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:08.543 14:16:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:08.543 14:16:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:08.543 14:16:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:08.543 14:16:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:08.543 14:16:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:08.543 14:16:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:08.543 14:16:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:08.543 14:16:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:18:08.543 14:16:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:08.543 14:16:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:08.543 14:16:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:08.543 14:16:35 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@59 -- # nvmftestinit 00:18:08.543 14:16:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:18:08.543 14:16:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:08.543 14:16:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:08.543 14:16:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:08.543 14:16:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:08.543 14:16:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.543 14:16:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:08.543 14:16:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.543 14:16:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:08.543 14:16:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:08.543 14:16:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:08.543 14:16:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:18:11.074 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:18:11.074 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:18:11.074 Found net devices under 0000:81:00.0: mlx_0_0 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:18:11.074 Found net devices under 0000:81:00.1: mlx_0_1 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@420 -- # rdma_device_init 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@58 -- # uname 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for 
net_dev in "${net_devs[@]}" 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:11.074 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:18:11.075 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:11.075 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:18:11.075 altname enp129s0f0np0 00:18:11.075 inet 192.168.100.8/24 scope global mlx_0_0 00:18:11.075 valid_lft forever preferred_lft forever 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:18:11.075 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:11.075 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:18:11.075 altname enp129s0f1np1 00:18:11.075 inet 192.168.100.9/24 scope global mlx_0_1 00:18:11.075 valid_lft forever preferred_lft forever 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@422 -- 
# return 0 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@456 -- # 
RDMA_IP_LIST='192.168.100.8 00:18:11.075 192.168.100.9' 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:18:11.075 192.168.100.9' 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # head -n 1 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:18:11.075 192.168.100.9' 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # tail -n +2 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # head -n 1 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=112661 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 112661 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 112661 ']' 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
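Just below, auth.sh starts minting DH-HMAC-CHAP material: gen_dhchap_key null 48 draws 24 random bytes as hex via xxd -p -c0 -l 24 /dev/urandom, and format_dhchap_key's inline python wraps them into the DHHC-1:<hash-id>:<base64>: secret representation (hash ids 0 null, 1 sha256, 2 sha384, 3 sha512, matching the digests map in the xtrace). Per the NVMe-oF spec that representation base64-encodes the key with a little-endian CRC-32 appended; the python body itself is not shown in the log, so the following is a spec-shaped sketch rather than SPDK's verbatim code, and it assumes the hex string is decoded to raw key bytes:

    # Produce a DHHC-1 secret from a fresh /dev/urandom draw (hash id 0 = null).
    # Encoding: base64(key || crc32le(key)) -- details hedged as noted above.
    key_hex=$(xxd -p -c0 -l 24 /dev/urandom)
    python3 -c 'import base64,sys,zlib; k=bytes.fromhex(sys.argv[1]); c=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(k+c).decode()))' "$key_hex" 0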
00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:11.075 14:16:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.333 14:16:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:11.333 14:16:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:18:11.333 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:11.333 14:16:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:11.333 14:16:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.333 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:11.333 14:16:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=112680 00:18:11.333 14:16:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:11.333 14:16:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:11.333 14:16:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:18:11.333 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:11.333 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:11.333 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:11.333 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:18:11.333 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:11.333 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:11.333 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=12d5b7ac5eefd9c5865699502efb5883cb647e1b3cd343bc 00:18:11.333 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:11.333 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.93s 00:18:11.333 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 12d5b7ac5eefd9c5865699502efb5883cb647e1b3cd343bc 0 00:18:11.333 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 12d5b7ac5eefd9c5865699502efb5883cb647e1b3cd343bc 0 00:18:11.333 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:11.333 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:11.333 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=12d5b7ac5eefd9c5865699502efb5883cb647e1b3cd343bc 00:18:11.333 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:11.333 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:11.592 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.93s 00:18:11.592 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.93s 00:18:11.592 14:16:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.93s 00:18:11.592 14:16:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:11.592 14:16:38 nvmf_rdma.nvmf_auth_target -- 
nvmf/common.sh@723 -- # local digest len file key 00:18:11.592 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:11.592 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:11.592 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:11.592 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:11.592 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:11.592 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=317266d9d3b6a36b327ea6a58f211f5fa3b91517b22c9ea17705ba5b625a0714 00:18:11.592 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:11.592 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Z62 00:18:11.592 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 317266d9d3b6a36b327ea6a58f211f5fa3b91517b22c9ea17705ba5b625a0714 3 00:18:11.592 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 317266d9d3b6a36b327ea6a58f211f5fa3b91517b22c9ea17705ba5b625a0714 3 00:18:11.592 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:11.592 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:11.592 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=317266d9d3b6a36b327ea6a58f211f5fa3b91517b22c9ea17705ba5b625a0714 00:18:11.592 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:11.592 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Z62 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Z62 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.Z62 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=451ed63cf752c79daf8671ecb6639f5e 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.EZJ 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 451ed63cf752c79daf8671ecb6639f5e 1 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 451ed63cf752c79daf8671ecb6639f5e 1 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # 
prefix=DHHC-1 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=451ed63cf752c79daf8671ecb6639f5e 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.EZJ 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.EZJ 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.EZJ 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2a43f7e28944c57316be019a5817d6d5604e13e9a315e584 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Gyj 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2a43f7e28944c57316be019a5817d6d5604e13e9a315e584 2 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2a43f7e28944c57316be019a5817d6d5604e13e9a315e584 2 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2a43f7e28944c57316be019a5817d6d5604e13e9a315e584 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Gyj 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Gyj 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.Gyj 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target 
-- nvmf/common.sh@727 -- # key=17e07f95a1b567f60977b3583ab020cc4ec7fcc032efd983 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.iTs 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 17e07f95a1b567f60977b3583ab020cc4ec7fcc032efd983 2 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 17e07f95a1b567f60977b3583ab020cc4ec7fcc032efd983 2 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=17e07f95a1b567f60977b3583ab020cc4ec7fcc032efd983 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.iTs 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.iTs 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.iTs 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4ed7337af179c8c5742138e81f7348c9 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.4E9 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4ed7337af179c8c5742138e81f7348c9 1 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4ed7337af179c8c5742138e81f7348c9 1 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4ed7337af179c8c5742138e81f7348c9 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.4E9 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.4E9 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.4E9 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b2da7e33b9fdf5da3729fbddd15d8b1463eaa8759d6443abfcfe2b95aefbc883 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.r5l 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b2da7e33b9fdf5da3729fbddd15d8b1463eaa8759d6443abfcfe2b95aefbc883 3 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b2da7e33b9fdf5da3729fbddd15d8b1463eaa8759d6443abfcfe2b95aefbc883 3 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b2da7e33b9fdf5da3729fbddd15d8b1463eaa8759d6443abfcfe2b95aefbc883 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:11.593 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:11.851 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.r5l 00:18:11.851 14:16:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.r5l 00:18:11.851 14:16:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.r5l 00:18:11.851 14:16:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:18:11.851 14:16:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 112661 00:18:11.851 14:16:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 112661 ']' 00:18:11.851 14:16:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.852 14:16:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:11.852 14:16:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
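Each gen_dhchap_key call traced above draws random hex from /dev/urandom with xxd, wraps it into a DHHC-1 secret via an inline "python -" step, and stashes it in a 0600 temp file. A hedged reconstruction follows: the xxd/mktemp/chmod steps mirror the log exactly, but the python body never appears in the trace, so the encoding shown (base64 of the ASCII hex secret plus a CRC32 trailer) is an inference from the DHHC-1:<digest>:<base64>: secrets that nvme connect uses later in this log.

gen_dhchap_key() {
    local digest=$1 len=$2 key file
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # len hex characters
    file=$(mktemp -t "spdk.key-$digest.XXX")
    python3 -c '
import base64, sys, zlib
secret = sys.argv[1].encode()                         # the ASCII hex string is the secret
crc = zlib.crc32(secret).to_bytes(4, "little")        # assumed checksum trailer
print(f"DHHC-1:{int(sys.argv[2]):02d}:{base64.b64encode(secret + crc).decode()}:")
' "$key" "${digests[$digest]}" > "$file"
    chmod 0600 "$file"
    echo "$file"
}

The inference checks out against this run: base64-decoding the MTJkNWI3... payload of the later connect secret yields exactly the 12d5b7ac... hex string generated here for key0, followed by four checksum bytes.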
00:18:11.852 14:16:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:11.852 14:16:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.109 14:16:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:12.109 14:16:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:18:12.109 14:16:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 112680 /var/tmp/host.sock 00:18:12.109 14:16:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 112680 ']' 00:18:12.109 14:16:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:18:12.109 14:16:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:12.109 14:16:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:12.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:12.109 14:16:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:12.109 14:16:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.367 14:16:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:12.367 14:16:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:18:12.367 14:16:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:18:12.367 14:16:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.367 14:16:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.367 14:16:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.367 14:16:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:12.367 14:16:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.93s 00:18:12.367 14:16:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.367 14:16:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.367 14:16:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.367 14:16:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.93s 00:18:12.367 14:16:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.93s 00:18:12.625 14:16:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.Z62 ]] 00:18:12.625 14:16:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Z62 00:18:12.625 14:16:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.625 14:16:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.625 14:16:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.625 14:16:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Z62 00:18:12.625 14:16:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Z62 00:18:12.883 14:16:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:12.883 14:16:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.EZJ 00:18:12.883 14:16:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.883 14:16:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.883 14:16:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.883 14:16:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.EZJ 00:18:12.883 14:16:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.EZJ 00:18:13.141 14:16:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.Gyj ]] 00:18:13.141 14:16:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Gyj 00:18:13.141 14:16:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.141 14:16:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.141 14:16:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.141 14:16:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Gyj 00:18:13.141 14:16:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Gyj 00:18:13.399 14:16:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:13.399 14:16:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.iTs 00:18:13.399 14:16:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.399 14:16:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.399 14:16:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.399 14:16:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.iTs 00:18:13.399 14:16:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.iTs 00:18:13.656 14:16:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.4E9 ]] 00:18:13.656 14:16:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4E9 00:18:13.656 14:16:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.656 14:16:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.656 14:16:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.656 14:16:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4E9 00:18:13.656 14:16:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4E9 00:18:13.914 14:16:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:13.914 14:16:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.r5l 00:18:13.914 14:16:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.914 14:16:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.914 14:16:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.914 14:16:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.r5l 00:18:13.914 14:16:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.r5l 00:18:14.172 14:16:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:18:14.172 14:16:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:14.172 14:16:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:14.172 14:16:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:14.172 14:16:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:14.172 14:16:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:14.430 14:16:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:18:14.430 14:16:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.430 14:16:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:14.430 14:16:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:14.430 14:16:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:14.430 14:16:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.430 14:16:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.430 14:16:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.430 14:16:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.430 14:16:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.430 14:16:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.430 14:16:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.688 00:18:14.945 14:16:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.945 14:16:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.945 14:16:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.945 14:16:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.945 14:16:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.945 14:16:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.945 14:16:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.203 14:16:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.203 14:16:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.203 { 00:18:15.203 "cntlid": 1, 00:18:15.203 "qid": 0, 00:18:15.203 "state": "enabled", 00:18:15.203 "listen_address": { 00:18:15.203 "trtype": "RDMA", 00:18:15.203 "adrfam": "IPv4", 00:18:15.203 "traddr": "192.168.100.8", 00:18:15.203 "trsvcid": "4420" 00:18:15.203 }, 00:18:15.203 "peer_address": { 00:18:15.203 "trtype": "RDMA", 00:18:15.203 "adrfam": "IPv4", 00:18:15.203 "traddr": "192.168.100.8", 00:18:15.203 "trsvcid": "41110" 00:18:15.203 }, 00:18:15.203 "auth": { 00:18:15.203 "state": "completed", 00:18:15.203 "digest": "sha256", 00:18:15.203 "dhgroup": "null" 00:18:15.203 } 00:18:15.203 } 00:18:15.203 ]' 00:18:15.203 14:16:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.203 14:16:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:15.203 14:16:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.203 14:16:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:15.203 14:16:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.203 14:16:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.203 14:16:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.203 14:16:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.461 14:16:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:00:MTJkNWI3YWM1ZWVmZDljNTg2NTY5OTUwMmVmYjU4ODNjYjY0N2UxYjNjZDM0M2Jj57D/bQ==: --dhchap-ctrl-secret DHHC-1:03:MzE3MjY2ZDlkM2I2YTM2YjMyN2VhNmE1OGYyMTFmNWZhM2I5MTUxN2IyMmM5ZWExNzcwNWJhNWI2MjVhMDcxNP858Ys=: 00:18:16.432 14:16:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.690 14:16:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:18:16.690 14:16:43 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.690 14:16:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.690 14:16:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.690 14:16:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.690 14:16:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:16.690 14:16:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:16.947 14:16:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:18:16.948 14:16:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:16.948 14:16:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:16.948 14:16:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:16.948 14:16:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:16.948 14:16:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.948 14:16:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.948 14:16:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.948 14:16:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.948 14:16:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.948 14:16:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.948 14:16:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.205 00:18:17.462 14:16:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.462 14:16:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.462 14:16:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.462 14:16:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.462 14:16:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.462 14:16:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.462 14:16:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.462 14:16:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.462 14:16:44 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.462 { 00:18:17.462 "cntlid": 3, 00:18:17.462 "qid": 0, 00:18:17.462 "state": "enabled", 00:18:17.462 "listen_address": { 00:18:17.462 "trtype": "RDMA", 00:18:17.462 "adrfam": "IPv4", 00:18:17.462 "traddr": "192.168.100.8", 00:18:17.462 "trsvcid": "4420" 00:18:17.462 }, 00:18:17.462 "peer_address": { 00:18:17.462 "trtype": "RDMA", 00:18:17.462 "adrfam": "IPv4", 00:18:17.462 "traddr": "192.168.100.8", 00:18:17.462 "trsvcid": "50185" 00:18:17.462 }, 00:18:17.462 "auth": { 00:18:17.462 "state": "completed", 00:18:17.462 "digest": "sha256", 00:18:17.462 "dhgroup": "null" 00:18:17.462 } 00:18:17.462 } 00:18:17.462 ]' 00:18:17.462 14:16:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.719 14:16:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:17.719 14:16:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.719 14:16:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:17.719 14:16:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.719 14:16:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.719 14:16:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.719 14:16:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.977 14:16:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:01:NDUxZWQ2M2NmNzUyYzc5ZGFmODY3MWVjYjY2MzlmNWVZ/epb: --dhchap-ctrl-secret DHHC-1:02:MmE0M2Y3ZTI4OTQ0YzU3MzE2YmUwMTlhNTgxN2Q2ZDU2MDRlMTNlOWEzMTVlNTg06IkDFg==: 00:18:18.909 14:16:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.166 14:16:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:18:19.166 14:16:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.166 14:16:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.166 14:16:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.166 14:16:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:19.166 14:16:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:19.166 14:16:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:19.423 14:16:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:18:19.423 14:16:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.423 14:16:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 
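The cycle now repeating is target/auth.sh's connect_authenticate: configure the SPDK host for one digest/dhgroup pair, register the host NQN on the subsystem with the key pair under test, attach a controller over RDMA, and confirm from the target's qpair listing that authentication completed with the expected parameters. A condensed sketch of one round, assuming (as every expansion in the trace shows) that rpc_cmd defaults to /var/tmp/spdk.sock while hostrpc targets /var/tmp/host.sock:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
rpc_cmd() { "$rpc" "$@"; }                        # target-side RPC
hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }  # host-side RPC

digest=sha256 dhgroup=null keyid=2
hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911

hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 \
    -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Verify the target's view of the freshly authenticated qpair.
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
hostrpc bdev_nvme_detach_controller nvme0

Each round is then replayed through the kernel initiator: nvme connect is handed the DHHC-1 secrets directly (--dhchap-secret / --dhchap-ctrl-secret), the controller is disconnected, and nvmf_subsystem_remove_host clears the host entry before the next digest/dhgroup/key combination.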
00:18:19.423 14:16:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:19.423 14:16:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:19.423 14:16:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.423 14:16:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.423 14:16:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.423 14:16:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.423 14:16:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.423 14:16:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.423 14:16:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.680 00:18:19.680 14:16:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.680 14:16:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.680 14:16:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.936 14:16:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.936 14:16:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.936 14:16:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.936 14:16:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.936 14:16:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.936 14:16:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.936 { 00:18:19.936 "cntlid": 5, 00:18:19.936 "qid": 0, 00:18:19.936 "state": "enabled", 00:18:19.936 "listen_address": { 00:18:19.936 "trtype": "RDMA", 00:18:19.936 "adrfam": "IPv4", 00:18:19.936 "traddr": "192.168.100.8", 00:18:19.936 "trsvcid": "4420" 00:18:19.936 }, 00:18:19.936 "peer_address": { 00:18:19.936 "trtype": "RDMA", 00:18:19.936 "adrfam": "IPv4", 00:18:19.936 "traddr": "192.168.100.8", 00:18:19.936 "trsvcid": "42407" 00:18:19.936 }, 00:18:19.936 "auth": { 00:18:19.936 "state": "completed", 00:18:19.936 "digest": "sha256", 00:18:19.936 "dhgroup": "null" 00:18:19.936 } 00:18:19.936 } 00:18:19.936 ]' 00:18:19.936 14:16:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:20.194 14:16:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:20.194 14:16:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.194 14:16:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null 
== \n\u\l\l ]] 00:18:20.194 14:16:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.194 14:16:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.194 14:16:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.194 14:16:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.451 14:16:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:02:MTdlMDdmOTVhMWI1NjdmNjA5NzdiMzU4M2FiMDIwY2M0ZWM3ZmNjMDMyZWZkOTgzuoDADg==: --dhchap-ctrl-secret DHHC-1:01:NGVkNzMzN2FmMTc5YzhjNTc0MjEzOGU4MWY3MzQ4YzlGMVrI: 00:18:21.381 14:16:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.639 14:16:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:18:21.639 14:16:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.639 14:16:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.639 14:16:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.639 14:16:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.639 14:16:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:21.639 14:16:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:21.897 14:16:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:18:21.897 14:16:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.897 14:16:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:21.897 14:16:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:21.897 14:16:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:21.897 14:16:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.897 14:16:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key3 00:18:21.897 14:16:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.897 14:16:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.897 14:16:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.897 14:16:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key3 00:18:21.897 14:16:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:22.154 00:18:22.154 14:16:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:22.154 14:16:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:22.154 14:16:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.412 14:16:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.412 14:16:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.412 14:16:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.412 14:16:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.412 14:16:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.412 14:16:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.412 { 00:18:22.412 "cntlid": 7, 00:18:22.412 "qid": 0, 00:18:22.412 "state": "enabled", 00:18:22.412 "listen_address": { 00:18:22.412 "trtype": "RDMA", 00:18:22.412 "adrfam": "IPv4", 00:18:22.412 "traddr": "192.168.100.8", 00:18:22.412 "trsvcid": "4420" 00:18:22.412 }, 00:18:22.412 "peer_address": { 00:18:22.412 "trtype": "RDMA", 00:18:22.412 "adrfam": "IPv4", 00:18:22.412 "traddr": "192.168.100.8", 00:18:22.412 "trsvcid": "38288" 00:18:22.412 }, 00:18:22.412 "auth": { 00:18:22.412 "state": "completed", 00:18:22.412 "digest": "sha256", 00:18:22.412 "dhgroup": "null" 00:18:22.412 } 00:18:22.412 } 00:18:22.412 ]' 00:18:22.412 14:16:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.412 14:16:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:22.412 14:16:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.412 14:16:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:22.412 14:16:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.412 14:16:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.412 14:16:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.412 14:16:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.977 14:16:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:03:YjJkYTdlMzNiOWZkZjVkYTM3MjlmYmRkZDE1ZDhiMTQ2M2VhYTg3NTlkNjQ0M2FiZmNmZTJiOTVhZWZiYzg4M2IKApo=: 00:18:23.908 14:16:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.908 14:16:51 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:18:23.908 14:16:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.908 14:16:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.908 14:16:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.908 14:16:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:23.908 14:16:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.908 14:16:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:23.908 14:16:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:24.166 14:16:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:18:24.166 14:16:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:24.166 14:16:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:24.166 14:16:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:24.166 14:16:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:24.166 14:16:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.166 14:16:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.166 14:16:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.166 14:16:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.166 14:16:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.166 14:16:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.166 14:16:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.423 00:18:24.423 14:16:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:24.423 14:16:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:24.423 14:16:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.680 14:16:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.680 14:16:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:24.680 14:16:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.680 14:16:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.680 14:16:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.680 14:16:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.680 { 00:18:24.680 "cntlid": 9, 00:18:24.680 "qid": 0, 00:18:24.680 "state": "enabled", 00:18:24.680 "listen_address": { 00:18:24.680 "trtype": "RDMA", 00:18:24.680 "adrfam": "IPv4", 00:18:24.680 "traddr": "192.168.100.8", 00:18:24.680 "trsvcid": "4420" 00:18:24.680 }, 00:18:24.680 "peer_address": { 00:18:24.680 "trtype": "RDMA", 00:18:24.680 "adrfam": "IPv4", 00:18:24.680 "traddr": "192.168.100.8", 00:18:24.680 "trsvcid": "49749" 00:18:24.680 }, 00:18:24.680 "auth": { 00:18:24.680 "state": "completed", 00:18:24.680 "digest": "sha256", 00:18:24.680 "dhgroup": "ffdhe2048" 00:18:24.680 } 00:18:24.680 } 00:18:24.680 ]' 00:18:24.680 14:16:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.680 14:16:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:24.680 14:16:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.680 14:16:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:24.680 14:16:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.937 14:16:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.937 14:16:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.937 14:16:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.195 14:16:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:00:MTJkNWI3YWM1ZWVmZDljNTg2NTY5OTUwMmVmYjU4ODNjYjY0N2UxYjNjZDM0M2Jj57D/bQ==: --dhchap-ctrl-secret DHHC-1:03:MzE3MjY2ZDlkM2I2YTM2YjMyN2VhNmE1OGYyMTFmNWZhM2I5MTUxN2IyMmM5ZWExNzcwNWJhNWI2MjVhMDcxNP858Ys=: 00:18:26.132 14:16:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.401 14:16:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:18:26.401 14:16:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.401 14:16:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.401 14:16:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.401 14:16:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.401 14:16:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:26.401 14:16:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:26.658 14:16:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:18:26.658 14:16:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.658 14:16:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:26.658 14:16:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:26.658 14:16:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:26.658 14:16:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.658 14:16:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.658 14:16:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.658 14:16:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.658 14:16:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.659 14:16:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.659 14:16:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.916 00:18:26.916 14:16:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:26.916 14:16:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.916 14:16:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.173 14:16:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.173 14:16:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.173 14:16:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.173 14:16:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.173 14:16:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.173 14:16:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.173 { 00:18:27.173 "cntlid": 11, 00:18:27.173 "qid": 0, 00:18:27.173 "state": "enabled", 00:18:27.173 "listen_address": { 00:18:27.173 "trtype": "RDMA", 00:18:27.173 "adrfam": "IPv4", 00:18:27.173 "traddr": "192.168.100.8", 00:18:27.173 "trsvcid": "4420" 00:18:27.173 }, 00:18:27.173 "peer_address": { 00:18:27.173 "trtype": "RDMA", 00:18:27.173 "adrfam": "IPv4", 00:18:27.173 "traddr": "192.168.100.8", 00:18:27.173 "trsvcid": "38929" 00:18:27.173 }, 00:18:27.173 "auth": { 00:18:27.173 "state": "completed", 00:18:27.173 
"digest": "sha256", 00:18:27.173 "dhgroup": "ffdhe2048" 00:18:27.173 } 00:18:27.173 } 00:18:27.173 ]' 00:18:27.173 14:16:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.173 14:16:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:27.173 14:16:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:27.173 14:16:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:27.173 14:16:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.173 14:16:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.173 14:16:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.173 14:16:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.431 14:16:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:01:NDUxZWQ2M2NmNzUyYzc5ZGFmODY3MWVjYjY2MzlmNWVZ/epb: --dhchap-ctrl-secret DHHC-1:02:MmE0M2Y3ZTI4OTQ0YzU3MzE2YmUwMTlhNTgxN2Q2ZDU2MDRlMTNlOWEzMTVlNTg06IkDFg==: 00:18:28.803 14:16:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.803 14:16:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:18:28.803 14:16:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.803 14:16:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.803 14:16:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.803 14:16:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:28.803 14:16:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:28.803 14:16:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:29.061 14:16:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:18:29.061 14:16:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:29.061 14:16:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:29.061 14:16:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:29.061 14:16:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:29.061 14:16:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.061 14:16:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.061 
00:18:28.803 14:16:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:28.803 14:16:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:18:28.803 14:16:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:18:29.061 14:16:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2
00:18:29.061 14:16:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:29.061 14:16:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:18:29.061 14:16:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:18:29.061 14:16:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:18:29.061 14:16:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:29.061 14:16:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:29.061 14:16:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:29.061 14:16:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:29.061 14:16:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:29.061 14:16:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:29.061 14:16:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:29.319
00:18:29.319 14:16:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:29.319 14:16:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:29.319 14:16:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:29.576 14:16:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:29.576 14:16:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:29.576 14:16:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:29.576 14:16:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:29.576 14:16:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:29.576 14:16:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:29.576 {
00:18:29.576 "cntlid": 13,
00:18:29.576 "qid": 0,
00:18:29.576 "state": "enabled",
00:18:29.576 "listen_address": {
00:18:29.576 "trtype": "RDMA",
00:18:29.576 "adrfam": "IPv4",
00:18:29.576 "traddr": "192.168.100.8",
00:18:29.576 "trsvcid": "4420"
00:18:29.576 },
00:18:29.576 "peer_address": {
00:18:29.576 "trtype": "RDMA",
00:18:29.576 "adrfam": "IPv4",
00:18:29.576 "traddr": "192.168.100.8",
00:18:29.576 "trsvcid": "60241"
00:18:29.576 },
00:18:29.576 "auth": {
00:18:29.576 "state": "completed",
00:18:29.576 "digest": "sha256",
00:18:29.576 "dhgroup": "ffdhe2048"
00:18:29.576 }
00:18:29.576 }
00:18:29.576 ]'
00:18:29.576 14:16:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:29.576 14:16:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:29.576 14:16:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:29.576 14:16:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:29.577 14:16:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:29.577 14:16:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:29.577 14:16:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:29.577 14:16:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:30.141 14:16:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:02:MTdlMDdmOTVhMWI1NjdmNjA5NzdiMzU4M2FiMDIwY2M0ZWM3ZmNjMDMyZWZkOTgzuoDADg==: --dhchap-ctrl-secret DHHC-1:01:NGVkNzMzN2FmMTc5YzhjNTc0MjEzOGU4MWY3MzQ4YzlGMVrI:
00:18:31.076 14:16:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:31.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:31.076 14:16:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911
00:18:31.076 14:16:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:31.076 14:16:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:31.076 14:16:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:31.076 14:16:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:31.076 14:16:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:18:31.076 14:16:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:18:31.371 14:16:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3
00:18:31.371 14:16:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:31.371 14:16:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:18:31.371 14:16:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:18:31.371 14:16:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:18:31.371 14:16:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:31.371 14:16:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key3
00:18:31.371 14:16:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:31.371 14:16:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:31.371 14:16:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:31.371 14:16:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:31.371 14:16:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:31.628
00:18:31.628 14:16:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:31.628 14:16:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:31.628 14:16:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:31.886 14:16:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:31.886 14:16:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:31.886 14:16:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:31.886 14:16:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:31.886 14:16:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:31.886 14:16:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:31.886 {
00:18:31.886 "cntlid": 15,
00:18:31.886 "qid": 0,
00:18:31.886 "state": "enabled",
00:18:31.886 "listen_address": {
00:18:31.886 "trtype": "RDMA",
00:18:31.886 "adrfam": "IPv4",
00:18:31.886 "traddr": "192.168.100.8",
00:18:31.886 "trsvcid": "4420"
00:18:31.886 },
00:18:31.886 "peer_address": {
00:18:31.886 "trtype": "RDMA",
00:18:31.886 "adrfam": "IPv4",
00:18:31.886 "traddr": "192.168.100.8",
00:18:31.886 "trsvcid": "48986"
00:18:31.886 },
00:18:31.886 "auth": {
00:18:31.886 "state": "completed",
00:18:31.886 "digest": "sha256",
00:18:31.886 "dhgroup": "ffdhe2048"
00:18:31.886 }
00:18:31.886 }
00:18:31.886 ]'
00:18:32.143 14:16:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:32.143 14:16:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:32.143 14:16:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:32.143 14:16:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:32.143 14:16:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:32.143 14:16:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:32.143 14:16:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:32.143 14:16:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:32.401 14:16:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:03:YjJkYTdlMzNiOWZkZjVkYTM3MjlmYmRkZDE1ZDhiMTQ2M2VhYTg3NTlkNjQ0M2FiZmNmZTJiOTVhZWZiYzg4M2IKApo=:
00:18:33.332 14:17:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:33.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:33.589 14:17:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911
00:18:33.589 14:17:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:33.589 14:17:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:33.589 14:17:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
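
After every attach, the test reads the subsystem's queue pairs back from the target and asserts that the admin qpair really completed DH-HMAC-CHAP with the expected parameters; that is what the repeated jq '.[0].auth.*' probes above do. The same check as a standalone sketch (target RPC socket assumed to be the default):

  # Fetch the qpairs of the subsystem and inspect what qid 0 negotiated.
  qpairs=$(/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
      nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
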
00:18:33.589 14:17:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:18:33.589 14:17:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:33.589 14:17:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:18:33.589 14:17:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:18:33.847 14:17:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0
00:18:33.847 14:17:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:33.847 14:17:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:18:33.847 14:17:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:18:33.847 14:17:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:18:33.847 14:17:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:33.847 14:17:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:33.847 14:17:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:33.847 14:17:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:33.847 14:17:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:33.847 14:17:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:33.847 14:17:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:34.105
00:18:34.105 14:17:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:34.105 14:17:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:34.105 14:17:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:34.362 14:17:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:34.362 14:17:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:34.362 14:17:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:34.362 14:17:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:34.362 14:17:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:34.362 14:17:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:34.362 {
00:18:34.362 "cntlid": 17,
00:18:34.362 "qid": 0,
00:18:34.362 "state": "enabled",
00:18:34.362 "listen_address": {
00:18:34.362 "trtype": "RDMA",
00:18:34.362 "adrfam": "IPv4",
00:18:34.362 "traddr": "192.168.100.8",
00:18:34.362 "trsvcid": "4420"
00:18:34.362 },
00:18:34.362 "peer_address": {
00:18:34.362 "trtype": "RDMA",
00:18:34.362 "adrfam": "IPv4",
00:18:34.362 "traddr": "192.168.100.8",
00:18:34.362 "trsvcid": "44982"
00:18:34.362 },
00:18:34.362 "auth": {
00:18:34.362 "state": "completed",
00:18:34.362 "digest": "sha256",
00:18:34.362 "dhgroup": "ffdhe3072"
00:18:34.362 }
00:18:34.362 }
00:18:34.362 ]'
00:18:34.362 14:17:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:34.362 14:17:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:34.362 14:17:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:34.362 14:17:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:18:34.620 14:17:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:34.620 14:17:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:34.620 14:17:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:34.620 14:17:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:34.878 14:17:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:00:MTJkNWI3YWM1ZWVmZDljNTg2NTY5OTUwMmVmYjU4ODNjYjY0N2UxYjNjZDM0M2Jj57D/bQ==: --dhchap-ctrl-secret DHHC-1:03:MzE3MjY2ZDlkM2I2YTM2YjMyN2VhNmE1OGYyMTFmNWZhM2I5MTUxN2IyMmM5ZWExNzcwNWJhNWI2MjVhMDcxNP858Ys=:
00:18:35.810 14:17:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:36.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:36.067 14:17:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911
00:18:36.067 14:17:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:36.067 14:17:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:36.067 14:17:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:36.067 14:17:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:36.067 14:17:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:18:36.067 14:17:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:18:36.325 14:17:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1
00:18:36.325 14:17:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:36.325 14:17:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:18:36.325 14:17:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:18:36.325 14:17:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:18:36.325 14:17:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:36.325 14:17:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:36.325 14:17:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:36.325 14:17:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:36.325 14:17:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:36.325 14:17:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:36.325 14:17:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:36.582
00:18:36.582 14:17:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:36.582 14:17:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:36.582 14:17:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:36.841 14:17:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:36.841 14:17:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:36.841 14:17:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:36.841 14:17:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:36.841 14:17:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:36.841 14:17:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:36.841 {
00:18:36.841 "cntlid": 19,
00:18:36.841 "qid": 0,
00:18:36.841 "state": "enabled",
00:18:36.841 "listen_address": {
00:18:36.841 "trtype": "RDMA",
00:18:36.841 "adrfam": "IPv4",
00:18:36.841 "traddr": "192.168.100.8",
00:18:36.841 "trsvcid": "4420"
00:18:36.841 },
00:18:36.841 "peer_address": {
00:18:36.841 "trtype": "RDMA",
00:18:36.841 "adrfam": "IPv4",
00:18:36.841 "traddr": "192.168.100.8",
00:18:36.841 "trsvcid": "53238"
00:18:36.841 },
00:18:36.841 "auth": {
00:18:36.841 "state": "completed",
00:18:36.841 "digest": "sha256",
00:18:36.841 "dhgroup": "ffdhe3072"
00:18:36.841 }
00:18:36.841 }
00:18:36.841 ]'
00:18:36.841 14:17:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:36.841 14:17:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:36.841 14:17:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:36.841 14:17:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:18:36.841 14:17:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:36.841 14:17:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:36.841 14:17:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:36.841 14:17:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:37.099 14:17:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:01:NDUxZWQ2M2NmNzUyYzc5ZGFmODY3MWVjYjY2MzlmNWVZ/epb: --dhchap-ctrl-secret DHHC-1:02:MmE0M2Y3ZTI4OTQ0YzU3MzE2YmUwMTlhNTgxN2Q2ZDU2MDRlMTNlOWEzMTVlNTg06IkDFg==:
00:18:38.471 14:17:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:38.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:38.471 14:17:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911
00:18:38.471 14:17:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:38.471 14:17:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:38.471 14:17:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
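
Each cycle also exercises the kernel initiator: nvme-cli reconnects with the raw in-band secrets instead of SPDK key names. The DHHC-1:XX:...: strings are the NVMe-oF representation of a DH-HMAC-CHAP secret; the two-digit field describes the secret's associated hash (00 for a plain secret, 01/02/03 for SHA-256/384/512-sized keys). A trimmed sketch of that leg, with the secrets elided:

  nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 \
      --hostid 6b85a288-a0c4-e211-af09-001e678e7911 \
      --dhchap-secret 'DHHC-1:01:...:' \
      --dhchap-ctrl-secret 'DHHC-1:02:...:'   # omit for one-way authentication
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
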
00:18:38.471 14:17:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:38.471 14:17:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:18:38.471 14:17:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:18:38.729 14:17:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2
00:18:38.729 14:17:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:38.729 14:17:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:18:38.729 14:17:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:18:38.729 14:17:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:18:38.729 14:17:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:38.729 14:17:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:38.729 14:17:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:38.729 14:17:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:38.729 14:17:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:38.729 14:17:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:38.729 14:17:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:38.988
00:18:38.988 14:17:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:38.988 14:17:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:38.988 14:17:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:39.246 14:17:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:39.246 14:17:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:39.246 14:17:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:39.246 14:17:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:39.246 14:17:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:39.246 14:17:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:39.246 {
00:18:39.246 "cntlid": 21,
00:18:39.246 "qid": 0,
00:18:39.246 "state": "enabled",
00:18:39.246 "listen_address": {
00:18:39.246 "trtype": "RDMA",
00:18:39.246 "adrfam": "IPv4",
00:18:39.246 "traddr": "192.168.100.8",
00:18:39.246 "trsvcid": "4420"
00:18:39.246 },
00:18:39.246 "peer_address": {
00:18:39.246 "trtype": "RDMA",
00:18:39.246 "adrfam": "IPv4",
00:18:39.246 "traddr": "192.168.100.8",
00:18:39.246 "trsvcid": "54937"
00:18:39.246 },
00:18:39.246 "auth": {
00:18:39.246 "state": "completed",
00:18:39.246 "digest": "sha256",
00:18:39.246 "dhgroup": "ffdhe3072"
00:18:39.246 }
00:18:39.246 }
00:18:39.246 ]'
00:18:39.503 14:17:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:39.503 14:17:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:39.503 14:17:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:39.503 14:17:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:18:39.503 14:17:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:39.503 14:17:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:39.503 14:17:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:39.503 14:17:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:39.761 14:17:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:02:MTdlMDdmOTVhMWI1NjdmNjA5NzdiMzU4M2FiMDIwY2M0ZWM3ZmNjMDMyZWZkOTgzuoDADg==: --dhchap-ctrl-secret DHHC-1:01:NGVkNzMzN2FmMTc5YzhjNTc0MjEzOGU4MWY3MzQ4YzlGMVrI:
00:18:40.694 14:17:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:40.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:40.952 14:17:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911
00:18:40.952 14:17:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:40.952 14:17:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:40.952 14:17:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:40.952 14:17:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:40.952 14:17:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:18:40.952 14:17:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:18:41.209 14:17:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3
00:18:41.209 14:17:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:41.209 14:17:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:18:41.209 14:17:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:18:41.209 14:17:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:18:41.209 14:17:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:41.209 14:17:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key3
00:18:41.209 14:17:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:41.209 14:17:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:41.209 14:17:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:41.209 14:17:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:41.210 14:17:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:41.467
00:18:41.467 14:17:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:41.467 14:17:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:41.467 14:17:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:41.725 14:17:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:41.725 14:17:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:41.725 14:17:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:41.725 14:17:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:41.725 14:17:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:41.725 14:17:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:41.725 {
00:18:41.725 "cntlid": 23,
00:18:41.725 "qid": 0,
00:18:41.725 "state": "enabled",
00:18:41.725 "listen_address": {
00:18:41.725 "trtype": "RDMA",
00:18:41.725 "adrfam": "IPv4",
00:18:41.725 "traddr": "192.168.100.8",
00:18:41.725 "trsvcid": "4420"
00:18:41.725 },
00:18:41.725 "peer_address": {
00:18:41.725 "trtype": "RDMA",
00:18:41.725 "adrfam": "IPv4",
00:18:41.725 "traddr": "192.168.100.8",
00:18:41.725 "trsvcid": "35900"
00:18:41.725 },
00:18:41.725 "auth": {
00:18:41.725 "state": "completed",
00:18:41.725 "digest": "sha256",
00:18:41.725 "dhgroup": "ffdhe3072"
00:18:41.725 }
00:18:41.725 }
00:18:41.725 ]'
00:18:41.725 14:17:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:41.725 14:17:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:41.725 14:17:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:41.725 14:17:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:18:41.982 14:17:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:41.982 14:17:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:41.982 14:17:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:41.982 14:17:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:42.240 14:17:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:03:YjJkYTdlMzNiOWZkZjVkYTM3MjlmYmRkZDE1ZDhiMTQ2M2VhYTg3NTlkNjQ0M2FiZmNmZTJiOTVhZWZiYzg4M2IKApo=:
00:18:43.173 14:17:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:43.430 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:43.431 14:17:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911
00:18:43.431 14:17:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:43.431 14:17:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:43.431 14:17:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
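
All of the DHHC-1 material in this run is pre-generated before the loops start. For reproducing such a setup by hand, recent nvme-cli releases ship a generator; the exact flag spelling differs between versions, so treat the following as a hypothetical sketch rather than the command this job ran:

  # Mint a 32-byte DH-HMAC-CHAP secret transformed with SHA-256 (hmac id 1).
  nvme gen-dhchap-key --key-length=32 --hmac=1 --nqn=nqn.2024-03.io.spdk:cnode0
  # Prints something of the form: DHHC-1:01:<base64 key material>:
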
00:18:43.431 14:17:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:18:43.431 14:17:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:43.431 14:17:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:18:43.431 14:17:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:18:43.689 14:17:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0
00:18:43.689 14:17:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:43.689 14:17:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:18:43.689 14:17:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:18:43.689 14:17:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:18:43.689 14:17:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:43.689 14:17:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:43.689 14:17:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:43.689 14:17:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:43.689 14:17:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:43.689 14:17:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:43.689 14:17:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:43.947
00:18:43.947 14:17:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:43.947 14:17:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:43.947 14:17:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:44.204 14:17:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:44.204 14:17:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:44.204 14:17:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:44.204 14:17:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:44.204 14:17:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:44.204 14:17:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:44.204 {
00:18:44.204 "cntlid": 25,
00:18:44.204 "qid": 0,
00:18:44.204 "state": "enabled",
00:18:44.204 "listen_address": {
00:18:44.204 "trtype": "RDMA",
00:18:44.204 "adrfam": "IPv4",
00:18:44.204 "traddr": "192.168.100.8",
00:18:44.204 "trsvcid": "4420"
00:18:44.204 },
00:18:44.204 "peer_address": {
00:18:44.204 "trtype": "RDMA",
00:18:44.204 "adrfam": "IPv4",
00:18:44.204 "traddr": "192.168.100.8",
00:18:44.204 "trsvcid": "53424"
00:18:44.204 },
00:18:44.204 "auth": {
00:18:44.204 "state": "completed",
00:18:44.204 "digest": "sha256",
00:18:44.204 "dhgroup": "ffdhe4096"
00:18:44.204 }
00:18:44.204 }
00:18:44.204 ]'
00:18:44.204 14:17:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:44.204 14:17:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:44.204 14:17:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:44.462 14:17:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:44.462 14:17:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:44.462 14:17:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:44.462 14:17:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:44.462 14:17:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:44.720 14:17:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:00:MTJkNWI3YWM1ZWVmZDljNTg2NTY5OTUwMmVmYjU4ODNjYjY0N2UxYjNjZDM0M2Jj57D/bQ==: --dhchap-ctrl-secret DHHC-1:03:MzE3MjY2ZDlkM2I2YTM2YjMyN2VhNmE1OGYyMTFmNWZhM2I5MTUxN2IyMmM5ZWExNzcwNWJhNWI2MjVhMDcxNP858Ys=:
00:18:45.691 14:17:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:45.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:45.949 14:17:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911
00:18:45.949 14:17:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:45.949 14:17:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:45.949 14:17:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
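
Note that the key3 passes call nvmf_subsystem_add_host and bdev_nvme_attach_controller with --dhchap-key only: no controller key exists for that key ID, so authentication is unidirectional (the host proves itself, the controller is never challenged). The auth.sh@37 line shows the bash idiom behind this: ${var:+...} expands to the controller-key arguments only when a ckey is defined. A self-contained illustration with hypothetical array values:

  ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2 [3]=)   # no controller key for ID 3
  for keyid in "${!ckeys[@]}"; do
      # Two extra arguments for IDs 0-2, nothing at all for ID 3.
      ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
      echo "key$keyid -> ${ckey[*]:-(host-only auth)}"
  done
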
00:18:45.949 14:17:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:45.949 14:17:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:18:45.949 14:17:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:18:46.205 14:17:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1
00:18:46.205 14:17:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:46.205 14:17:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:18:46.205 14:17:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:18:46.205 14:17:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:18:46.205 14:17:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:46.205 14:17:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:46.205 14:17:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:46.205 14:17:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:46.205 14:17:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:46.205 14:17:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:46.205 14:17:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:46.461
00:18:46.461 14:17:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:46.461 14:17:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:46.461 14:17:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:46.718 14:17:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:46.718 14:17:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:46.718 14:17:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:46.718 14:17:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:46.718 14:17:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:46.718 14:17:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:46.718 {
00:18:46.718 "cntlid": 27,
00:18:46.718 "qid": 0,
00:18:46.718 "state": "enabled",
00:18:46.718 "listen_address": {
00:18:46.718 "trtype": "RDMA",
00:18:46.718 "adrfam": "IPv4",
00:18:46.718 "traddr": "192.168.100.8",
00:18:46.718 "trsvcid": "4420"
00:18:46.718 },
00:18:46.718 "peer_address": {
00:18:46.718 "trtype": "RDMA",
00:18:46.718 "adrfam": "IPv4",
00:18:46.718 "traddr": "192.168.100.8",
00:18:46.718 "trsvcid": "60960"
00:18:46.718 },
00:18:46.718 "auth": {
00:18:46.718 "state": "completed",
00:18:46.718 "digest": "sha256",
00:18:46.718 "dhgroup": "ffdhe4096"
00:18:46.718 }
00:18:46.718 }
00:18:46.718 ]'
00:18:46.975 14:17:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:46.975 14:17:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:46.975 14:17:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:46.975 14:17:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:46.975 14:17:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:46.975 14:17:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:46.975 14:17:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:46.975 14:17:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:47.232 14:17:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:01:NDUxZWQ2M2NmNzUyYzc5ZGFmODY3MWVjYjY2MzlmNWVZ/epb: --dhchap-ctrl-secret DHHC-1:02:MmE0M2Y3ZTI4OTQ0YzU3MzE2YmUwMTlhNTgxN2Q2ZDU2MDRlMTNlOWEzMTVlNTg06IkDFg==:
00:18:48.602 14:17:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:48.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:48.602 14:17:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911
00:18:48.602 14:17:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:48.602 14:17:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:48.602 14:17:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:48.602 14:17:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:48.602 14:17:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:18:48.602 14:17:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:18:48.860 14:17:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2
00:18:48.860 14:17:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:48.860 14:17:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:18:48.860 14:17:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:18:48.860 14:17:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:18:48.860 14:17:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:48.860 14:17:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:48.860 14:17:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:48.860 14:17:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:48.860 14:17:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:48.860 14:17:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:48.860 14:17:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:49.118
00:18:49.118 14:17:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:49.118 14:17:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:49.118 14:17:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:49.376 14:17:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:49.376 14:17:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:49.376 14:17:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:49.376 14:17:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:49.376 14:17:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:49.376 14:17:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:49.376 {
00:18:49.376 "cntlid": 29,
00:18:49.376 "qid": 0,
00:18:49.376 "state": "enabled",
00:18:49.376 "listen_address": {
00:18:49.376 "trtype": "RDMA",
00:18:49.376 "adrfam": "IPv4",
00:18:49.376 "traddr": "192.168.100.8",
00:18:49.376 "trsvcid": "4420"
00:18:49.376 },
00:18:49.376 "peer_address": {
00:18:49.376 "trtype": "RDMA",
00:18:49.376 "adrfam": "IPv4",
00:18:49.376 "traddr": "192.168.100.8",
00:18:49.376 "trsvcid": "54463"
00:18:49.376 },
00:18:49.376 "auth": {
00:18:49.376 "state": "completed",
00:18:49.376 "digest": "sha256",
00:18:49.376 "dhgroup": "ffdhe4096"
00:18:49.376 }
00:18:49.376 }
00:18:49.376 ]'
00:18:49.376 14:17:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:49.376 14:17:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:49.376 14:17:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:49.376 14:17:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:49.634 14:17:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:49.634 14:17:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:49.634 14:17:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:49.634 14:17:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:49.892 14:17:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:02:MTdlMDdmOTVhMWI1NjdmNjA5NzdiMzU4M2FiMDIwY2M0ZWM3ZmNjMDMyZWZkOTgzuoDADg==: --dhchap-ctrl-secret DHHC-1:01:NGVkNzMzN2FmMTc5YzhjNTc0MjEzOGU4MWY3MzQ4YzlGMVrI:
00:18:50.825 14:17:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:50.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:50.825 14:17:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911
00:18:50.825 14:17:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:50.825 14:17:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
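
Every host-side step in this trace goes through the hostrpc helper, whose expansion the target/auth.sh@31 lines show: plain rpc.py pinned to the host application's RPC socket, kept separate from rpc_cmd, which drives the nvmf target - two SPDK processes, two sockets. A sketch of the wrapper as those expansions imply it:

  hostrpc() {
      /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/host.sock "$@"
  }
  hostrpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
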
00:18:50.825 14:17:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:50.825 14:17:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:50.825 14:17:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:18:50.825 14:17:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:18:51.083 14:17:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3
00:18:51.083 14:17:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:51.083 14:17:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:18:51.083 14:17:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:18:51.083 14:17:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:18:51.083 14:17:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:51.083 14:17:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key3
00:18:51.083 14:17:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:51.083 14:17:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:51.083 14:17:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:51.083 14:17:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:51.083 14:17:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:18:51.647
00:18:51.648 14:17:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:51.648 14:17:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:51.648 14:17:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:51.906 14:17:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:51.906 14:17:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:51.906 14:17:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:51.906 14:17:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:51.906 14:17:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:51.906 14:17:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:51.906 {
00:18:51.906 "cntlid": 31,
00:18:51.906 "qid": 0,
00:18:51.906 "state": "enabled",
00:18:51.906 "listen_address": {
00:18:51.906 "trtype": "RDMA",
00:18:51.906 "adrfam": "IPv4",
00:18:51.906 "traddr": "192.168.100.8",
00:18:51.906 "trsvcid": "4420"
00:18:51.906 },
00:18:51.906 "peer_address": {
00:18:51.906 "trtype": "RDMA",
00:18:51.906 "adrfam": "IPv4",
00:18:51.906 "traddr": "192.168.100.8",
00:18:51.906 "trsvcid": "49746"
00:18:51.906 },
00:18:51.906 "auth": {
00:18:51.906 "state": "completed",
00:18:51.906 "digest": "sha256",
00:18:51.906 "dhgroup": "ffdhe4096"
00:18:51.906 }
00:18:51.906 }
00:18:51.906 ]'
00:18:51.906 14:17:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:51.906 14:17:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:51.906 14:17:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:51.906 14:17:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:51.906 14:17:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:51.906 14:17:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:51.906 14:17:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:51.906 14:17:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:52.164 14:17:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:03:YjJkYTdlMzNiOWZkZjVkYTM3MjlmYmRkZDE1ZDhiMTQ2M2VhYTg3NTlkNjQ0M2FiZmNmZTJiOTVhZWZiYzg4M2IKApo=:
00:18:53.095 14:17:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:53.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:53.354 14:17:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911
00:18:53.354 14:17:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:53.354 14:17:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:53.354 14:17:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:53.354 14:17:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:18:53.354 14:17:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:53.354 14:17:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:18:53.354 14:17:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:18:53.612 14:17:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0
00:18:53.612 14:17:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:53.612 14:17:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:18:53.612 14:17:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:18:53.612 14:17:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:18:53.612 14:17:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:53.612 14:17:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:53.612 14:17:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:53.612 14:17:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:53.612 14:17:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:53.612 14:17:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:53.612 14:17:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:54.177
00:18:54.177 14:17:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:18:54.177 14:17:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:18:54.177 14:17:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:54.434 14:17:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:54.434 14:17:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:54.434 14:17:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:54.434 14:17:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:54.434 14:17:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:54.434 14:17:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:18:54.434 {
00:18:54.434 "cntlid": 33,
00:18:54.434 "qid": 0,
00:18:54.434 "state": "enabled",
00:18:54.434 "listen_address": {
00:18:54.434 "trtype": "RDMA",
00:18:54.434 "adrfam": "IPv4",
00:18:54.434 "traddr": "192.168.100.8",
00:18:54.434 "trsvcid": "4420"
00:18:54.434 },
00:18:54.434 "peer_address": {
00:18:54.434 "trtype": "RDMA",
00:18:54.434 "adrfam": "IPv4",
00:18:54.434 "traddr": "192.168.100.8",
00:18:54.434 "trsvcid": "47089"
00:18:54.434 },
00:18:54.434 "auth": {
00:18:54.434 "state": "completed",
00:18:54.434 "digest": "sha256",
00:18:54.434 "dhgroup": "ffdhe6144"
00:18:54.434 }
00:18:54.434 }
00:18:54.434 ]'
00:18:54.434 14:17:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:18:54.434 14:17:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:54.434 14:17:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:18:54.435 14:17:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:18:54.692 14:17:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:18:54.692 14:17:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:54.692 14:17:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:54.692 14:17:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:54.949 14:17:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:00:MTJkNWI3YWM1ZWVmZDljNTg2NTY5OTUwMmVmYjU4ODNjYjY0N2UxYjNjZDM0M2Jj57D/bQ==: --dhchap-ctrl-secret DHHC-1:03:MzE3MjY2ZDlkM2I2YTM2YjMyN2VhNmE1OGYyMTFmNWZhM2I5MTUxN2IyMmM5ZWExNzcwNWJhNWI2MjVhMDcxNP858Ys=:
00:18:55.881 14:17:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:55.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:55.881 14:17:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911
00:18:55.881 14:17:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:55.881 14:17:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:56.150 14:17:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:56.150 14:17:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:56.150 14:17:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:18:56.150 14:17:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:18:56.150 14:17:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1
00:18:56.150 14:17:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:56.150 14:17:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:18:56.150 14:17:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:18:56.150 14:17:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:18:56.150 14:17:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:56.150 14:17:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:56.150 14:17:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:56.150 14:17:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:56.150 14:17:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:56.150 14:17:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:56.150 14:17:23
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.716 00:18:56.716 14:17:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.716 14:17:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.716 14:17:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.974 14:17:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.974 14:17:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.974 14:17:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.974 14:17:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.974 14:17:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.974 14:17:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.974 { 00:18:56.974 "cntlid": 35, 00:18:56.974 "qid": 0, 00:18:56.974 "state": "enabled", 00:18:56.974 "listen_address": { 00:18:56.974 "trtype": "RDMA", 00:18:56.974 "adrfam": "IPv4", 00:18:56.974 "traddr": "192.168.100.8", 00:18:56.974 "trsvcid": "4420" 00:18:56.974 }, 00:18:56.974 "peer_address": { 00:18:56.974 "trtype": "RDMA", 00:18:56.974 "adrfam": "IPv4", 00:18:56.974 "traddr": "192.168.100.8", 00:18:56.974 "trsvcid": "60700" 00:18:56.974 }, 00:18:56.974 "auth": { 00:18:56.974 "state": "completed", 00:18:56.974 "digest": "sha256", 00:18:56.974 "dhgroup": "ffdhe6144" 00:18:56.974 } 00:18:56.974 } 00:18:56.974 ]' 00:18:56.974 14:17:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.232 14:17:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:57.232 14:17:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.232 14:17:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:57.232 14:17:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.232 14:17:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.232 14:17:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.232 14:17:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.490 14:17:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:01:NDUxZWQ2M2NmNzUyYzc5ZGFmODY3MWVjYjY2MzlmNWVZ/epb: --dhchap-ctrl-secret DHHC-1:02:MmE0M2Y3ZTI4OTQ0YzU3MzE2YmUwMTlhNTgxN2Q2ZDU2MDRlMTNlOWEzMTVlNTg06IkDFg==: 00:18:58.864 14:17:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.864 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.864 14:17:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:18:58.864 14:17:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.864 14:17:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.864 14:17:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.864 14:17:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.864 14:17:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:58.864 14:17:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:58.864 14:17:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:18:58.864 14:17:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.864 14:17:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:58.864 14:17:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:58.864 14:17:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:58.864 14:17:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.864 14:17:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.864 14:17:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.864 14:17:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.864 14:17:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.864 14:17:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.864 14:17:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.429 00:18:59.429 14:17:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.429 14:17:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.429 14:17:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.701 14:17:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.701 14:17:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.701 14:17:27 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.701 14:17:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.701 14:17:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.701 14:17:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.701 { 00:18:59.701 "cntlid": 37, 00:18:59.701 "qid": 0, 00:18:59.701 "state": "enabled", 00:18:59.701 "listen_address": { 00:18:59.701 "trtype": "RDMA", 00:18:59.701 "adrfam": "IPv4", 00:18:59.701 "traddr": "192.168.100.8", 00:18:59.701 "trsvcid": "4420" 00:18:59.701 }, 00:18:59.701 "peer_address": { 00:18:59.701 "trtype": "RDMA", 00:18:59.701 "adrfam": "IPv4", 00:18:59.701 "traddr": "192.168.100.8", 00:18:59.701 "trsvcid": "35570" 00:18:59.701 }, 00:18:59.701 "auth": { 00:18:59.701 "state": "completed", 00:18:59.701 "digest": "sha256", 00:18:59.701 "dhgroup": "ffdhe6144" 00:18:59.701 } 00:18:59.701 } 00:18:59.701 ]' 00:18:59.701 14:17:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.701 14:17:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:59.701 14:17:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.963 14:17:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:59.963 14:17:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.963 14:17:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.963 14:17:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.963 14:17:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.221 14:17:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:02:MTdlMDdmOTVhMWI1NjdmNjA5NzdiMzU4M2FiMDIwY2M0ZWM3ZmNjMDMyZWZkOTgzuoDADg==: --dhchap-ctrl-secret DHHC-1:01:NGVkNzMzN2FmMTc5YzhjNTc0MjEzOGU4MWY3MzQ4YzlGMVrI: 00:19:01.199 14:17:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.457 14:17:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:19:01.457 14:17:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.457 14:17:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.457 14:17:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.457 14:17:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.457 14:17:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:01.457 14:17:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe6144 00:19:01.715 14:17:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:19:01.715 14:17:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.715 14:17:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:01.715 14:17:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:01.715 14:17:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:01.715 14:17:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.715 14:17:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key3 00:19:01.715 14:17:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.715 14:17:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.715 14:17:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.715 14:17:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:01.715 14:17:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:02.277 00:19:02.277 14:17:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.277 14:17:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.277 14:17:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.532 14:17:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.532 14:17:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.532 14:17:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.532 14:17:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.532 14:17:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.532 14:17:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.532 { 00:19:02.532 "cntlid": 39, 00:19:02.532 "qid": 0, 00:19:02.532 "state": "enabled", 00:19:02.532 "listen_address": { 00:19:02.532 "trtype": "RDMA", 00:19:02.532 "adrfam": "IPv4", 00:19:02.532 "traddr": "192.168.100.8", 00:19:02.532 "trsvcid": "4420" 00:19:02.532 }, 00:19:02.532 "peer_address": { 00:19:02.532 "trtype": "RDMA", 00:19:02.532 "adrfam": "IPv4", 00:19:02.532 "traddr": "192.168.100.8", 00:19:02.532 "trsvcid": "53243" 00:19:02.532 }, 00:19:02.532 "auth": { 00:19:02.532 "state": "completed", 00:19:02.532 "digest": "sha256", 00:19:02.532 "dhgroup": "ffdhe6144" 00:19:02.532 } 00:19:02.532 } 00:19:02.532 ]' 00:19:02.532 14:17:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.532 14:17:29 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:02.532 14:17:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.532 14:17:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:02.532 14:17:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.532 14:17:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.532 14:17:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.533 14:17:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.790 14:17:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:03:YjJkYTdlMzNiOWZkZjVkYTM3MjlmYmRkZDE1ZDhiMTQ2M2VhYTg3NTlkNjQ0M2FiZmNmZTJiOTVhZWZiYzg4M2IKApo=: 00:19:04.160 14:17:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.160 14:17:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:19:04.160 14:17:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.160 14:17:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.160 14:17:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.160 14:17:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:04.160 14:17:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.160 14:17:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:04.160 14:17:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:04.418 14:17:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:19:04.418 14:17:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.418 14:17:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:04.418 14:17:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:04.418 14:17:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:04.418 14:17:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.418 14:17:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.418 14:17:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.418 14:17:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:04.418 14:17:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.418 14:17:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.418 14:17:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.349 00:19:05.349 14:17:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.349 14:17:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.349 14:17:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.607 14:17:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.607 14:17:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.607 14:17:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.607 14:17:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.607 14:17:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.607 14:17:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.607 { 00:19:05.607 "cntlid": 41, 00:19:05.607 "qid": 0, 00:19:05.607 "state": "enabled", 00:19:05.607 "listen_address": { 00:19:05.607 "trtype": "RDMA", 00:19:05.607 "adrfam": "IPv4", 00:19:05.607 "traddr": "192.168.100.8", 00:19:05.607 "trsvcid": "4420" 00:19:05.607 }, 00:19:05.607 "peer_address": { 00:19:05.607 "trtype": "RDMA", 00:19:05.607 "adrfam": "IPv4", 00:19:05.607 "traddr": "192.168.100.8", 00:19:05.607 "trsvcid": "43012" 00:19:05.607 }, 00:19:05.607 "auth": { 00:19:05.607 "state": "completed", 00:19:05.607 "digest": "sha256", 00:19:05.607 "dhgroup": "ffdhe8192" 00:19:05.607 } 00:19:05.607 } 00:19:05.607 ]' 00:19:05.607 14:17:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.607 14:17:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:05.607 14:17:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.607 14:17:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:05.607 14:17:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.607 14:17:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.607 14:17:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.607 14:17:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.864 14:17:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:00:MTJkNWI3YWM1ZWVmZDljNTg2NTY5OTUwMmVmYjU4ODNjYjY0N2UxYjNjZDM0M2Jj57D/bQ==: --dhchap-ctrl-secret DHHC-1:03:MzE3MjY2ZDlkM2I2YTM2YjMyN2VhNmE1OGYyMTFmNWZhM2I5MTUxN2IyMmM5ZWExNzcwNWJhNWI2MjVhMDcxNP858Ys=: 00:19:07.234 14:17:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.234 14:17:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:19:07.234 14:17:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.234 14:17:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.234 14:17:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.234 14:17:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:07.234 14:17:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:07.234 14:17:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:07.234 14:17:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:19:07.234 14:17:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.234 14:17:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:07.234 14:17:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:07.234 14:17:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:07.234 14:17:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.234 14:17:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.234 14:17:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.234 14:17:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.234 14:17:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.234 14:17:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.234 14:17:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.166 00:19:08.166 14:17:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.166 14:17:35 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.166 14:17:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.423 14:17:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.423 14:17:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.423 14:17:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.423 14:17:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.423 14:17:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.423 14:17:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:08.423 { 00:19:08.423 "cntlid": 43, 00:19:08.423 "qid": 0, 00:19:08.423 "state": "enabled", 00:19:08.423 "listen_address": { 00:19:08.423 "trtype": "RDMA", 00:19:08.423 "adrfam": "IPv4", 00:19:08.423 "traddr": "192.168.100.8", 00:19:08.423 "trsvcid": "4420" 00:19:08.423 }, 00:19:08.423 "peer_address": { 00:19:08.423 "trtype": "RDMA", 00:19:08.423 "adrfam": "IPv4", 00:19:08.423 "traddr": "192.168.100.8", 00:19:08.423 "trsvcid": "59403" 00:19:08.423 }, 00:19:08.423 "auth": { 00:19:08.423 "state": "completed", 00:19:08.423 "digest": "sha256", 00:19:08.423 "dhgroup": "ffdhe8192" 00:19:08.423 } 00:19:08.423 } 00:19:08.423 ]' 00:19:08.423 14:17:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:08.423 14:17:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:08.423 14:17:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:08.681 14:17:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:08.681 14:17:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:08.681 14:17:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.681 14:17:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.681 14:17:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.939 14:17:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:01:NDUxZWQ2M2NmNzUyYzc5ZGFmODY3MWVjYjY2MzlmNWVZ/epb: --dhchap-ctrl-secret DHHC-1:02:MmE0M2Y3ZTI4OTQ0YzU3MzE2YmUwMTlhNTgxN2Q2ZDU2MDRlMTNlOWEzMTVlNTg06IkDFg==: 00:19:09.871 14:17:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.128 14:17:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:19:10.128 14:17:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.128 14:17:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.128 14:17:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:19:10.128 14:17:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.128 14:17:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:10.129 14:17:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:10.386 14:17:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:19:10.386 14:17:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.386 14:17:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:10.386 14:17:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:10.386 14:17:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:10.386 14:17:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.386 14:17:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.386 14:17:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.386 14:17:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.386 14:17:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.386 14:17:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.386 14:17:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.318 00:19:11.318 14:17:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.318 14:17:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.318 14:17:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.575 14:17:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.575 14:17:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.575 14:17:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.575 14:17:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.575 14:17:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.575 14:17:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.575 { 00:19:11.575 "cntlid": 45, 00:19:11.575 "qid": 0, 00:19:11.575 "state": "enabled", 00:19:11.575 "listen_address": { 00:19:11.575 "trtype": "RDMA", 00:19:11.575 "adrfam": "IPv4", 
00:19:11.575 "traddr": "192.168.100.8", 00:19:11.575 "trsvcid": "4420" 00:19:11.575 }, 00:19:11.575 "peer_address": { 00:19:11.575 "trtype": "RDMA", 00:19:11.575 "adrfam": "IPv4", 00:19:11.575 "traddr": "192.168.100.8", 00:19:11.575 "trsvcid": "45056" 00:19:11.575 }, 00:19:11.575 "auth": { 00:19:11.575 "state": "completed", 00:19:11.575 "digest": "sha256", 00:19:11.575 "dhgroup": "ffdhe8192" 00:19:11.575 } 00:19:11.575 } 00:19:11.575 ]' 00:19:11.575 14:17:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.575 14:17:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:11.575 14:17:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.575 14:17:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:11.575 14:17:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.575 14:17:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.575 14:17:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.575 14:17:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.833 14:17:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:02:MTdlMDdmOTVhMWI1NjdmNjA5NzdiMzU4M2FiMDIwY2M0ZWM3ZmNjMDMyZWZkOTgzuoDADg==: --dhchap-ctrl-secret DHHC-1:01:NGVkNzMzN2FmMTc5YzhjNTc0MjEzOGU4MWY3MzQ4YzlGMVrI: 00:19:13.204 14:17:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.204 14:17:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:19:13.204 14:17:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.204 14:17:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.204 14:17:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.204 14:17:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.204 14:17:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:13.204 14:17:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:13.461 14:17:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:19:13.461 14:17:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.461 14:17:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:13.461 14:17:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:13.461 14:17:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:13.461 14:17:40 nvmf_rdma.nvmf_auth_target 
-- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.461 14:17:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key3 00:19:13.461 14:17:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.461 14:17:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.461 14:17:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.461 14:17:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:13.461 14:17:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:14.393 00:19:14.393 14:17:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.393 14:17:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.393 14:17:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.393 14:17:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.393 14:17:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.393 14:17:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.393 14:17:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.651 14:17:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.651 14:17:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.651 { 00:19:14.651 "cntlid": 47, 00:19:14.651 "qid": 0, 00:19:14.651 "state": "enabled", 00:19:14.651 "listen_address": { 00:19:14.651 "trtype": "RDMA", 00:19:14.651 "adrfam": "IPv4", 00:19:14.651 "traddr": "192.168.100.8", 00:19:14.651 "trsvcid": "4420" 00:19:14.651 }, 00:19:14.651 "peer_address": { 00:19:14.651 "trtype": "RDMA", 00:19:14.651 "adrfam": "IPv4", 00:19:14.651 "traddr": "192.168.100.8", 00:19:14.651 "trsvcid": "54692" 00:19:14.651 }, 00:19:14.651 "auth": { 00:19:14.651 "state": "completed", 00:19:14.651 "digest": "sha256", 00:19:14.651 "dhgroup": "ffdhe8192" 00:19:14.651 } 00:19:14.651 } 00:19:14.651 ]' 00:19:14.651 14:17:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.651 14:17:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:14.651 14:17:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.651 14:17:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:14.651 14:17:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.651 14:17:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.651 14:17:41 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.651 14:17:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.910 14:17:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:03:YjJkYTdlMzNiOWZkZjVkYTM3MjlmYmRkZDE1ZDhiMTQ2M2VhYTg3NTlkNjQ0M2FiZmNmZTJiOTVhZWZiYzg4M2IKApo=: 00:19:15.843 14:17:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.100 14:17:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:19:16.100 14:17:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.100 14:17:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.100 14:17:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.101 14:17:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:16.101 14:17:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:16.101 14:17:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.101 14:17:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:16.101 14:17:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:16.359 14:17:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:19:16.359 14:17:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.359 14:17:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:16.359 14:17:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:16.359 14:17:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:16.359 14:17:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.359 14:17:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.359 14:17:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.359 14:17:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.359 14:17:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.359 14:17:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.359 14:17:43 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.619 00:19:16.619 14:17:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.619 14:17:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.619 14:17:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.908 14:17:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.908 14:17:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.908 14:17:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.908 14:17:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.908 14:17:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.908 14:17:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:16.908 { 00:19:16.908 "cntlid": 49, 00:19:16.908 "qid": 0, 00:19:16.908 "state": "enabled", 00:19:16.908 "listen_address": { 00:19:16.908 "trtype": "RDMA", 00:19:16.908 "adrfam": "IPv4", 00:19:16.908 "traddr": "192.168.100.8", 00:19:16.908 "trsvcid": "4420" 00:19:16.908 }, 00:19:16.908 "peer_address": { 00:19:16.908 "trtype": "RDMA", 00:19:16.908 "adrfam": "IPv4", 00:19:16.908 "traddr": "192.168.100.8", 00:19:16.908 "trsvcid": "55402" 00:19:16.908 }, 00:19:16.908 "auth": { 00:19:16.908 "state": "completed", 00:19:16.908 "digest": "sha384", 00:19:16.908 "dhgroup": "null" 00:19:16.908 } 00:19:16.908 } 00:19:16.908 ]' 00:19:16.908 14:17:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:16.908 14:17:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:16.908 14:17:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.908 14:17:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:16.908 14:17:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.908 14:17:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.908 14:17:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.908 14:17:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.166 14:17:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:00:MTJkNWI3YWM1ZWVmZDljNTg2NTY5OTUwMmVmYjU4ODNjYjY0N2UxYjNjZDM0M2Jj57D/bQ==: --dhchap-ctrl-secret DHHC-1:03:MzE3MjY2ZDlkM2I2YTM2YjMyN2VhNmE1OGYyMTFmNWZhM2I5MTUxN2IyMmM5ZWExNzcwNWJhNWI2MjVhMDcxNP858Ys=: 00:19:18.539 14:17:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.539 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.539 14:17:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:19:18.539 14:17:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.539 14:17:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.539 14:17:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.539 14:17:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:18.539 14:17:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:18.539 14:17:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:18.539 14:17:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:19:18.539 14:17:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.539 14:17:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:18.539 14:17:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:18.796 14:17:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:18.796 14:17:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.796 14:17:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.796 14:17:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.796 14:17:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.796 14:17:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.796 14:17:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.796 14:17:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.054 00:19:19.054 14:17:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:19.054 14:17:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:19.054 14:17:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.311 14:17:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.311 14:17:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.311 14:17:46 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.311 14:17:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.311 14:17:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.311 14:17:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:19.311 { 00:19:19.311 "cntlid": 51, 00:19:19.311 "qid": 0, 00:19:19.311 "state": "enabled", 00:19:19.311 "listen_address": { 00:19:19.311 "trtype": "RDMA", 00:19:19.311 "adrfam": "IPv4", 00:19:19.311 "traddr": "192.168.100.8", 00:19:19.311 "trsvcid": "4420" 00:19:19.311 }, 00:19:19.311 "peer_address": { 00:19:19.311 "trtype": "RDMA", 00:19:19.311 "adrfam": "IPv4", 00:19:19.311 "traddr": "192.168.100.8", 00:19:19.311 "trsvcid": "54565" 00:19:19.311 }, 00:19:19.311 "auth": { 00:19:19.311 "state": "completed", 00:19:19.311 "digest": "sha384", 00:19:19.311 "dhgroup": "null" 00:19:19.311 } 00:19:19.311 } 00:19:19.311 ]' 00:19:19.311 14:17:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:19.311 14:17:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:19.311 14:17:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:19.311 14:17:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:19.311 14:17:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:19.311 14:17:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.311 14:17:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.311 14:17:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.569 14:17:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:01:NDUxZWQ2M2NmNzUyYzc5ZGFmODY3MWVjYjY2MzlmNWVZ/epb: --dhchap-ctrl-secret DHHC-1:02:MmE0M2Y3ZTI4OTQ0YzU3MzE2YmUwMTlhNTgxN2Q2ZDU2MDRlMTNlOWEzMTVlNTg06IkDFg==: 00:19:20.941 14:17:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.941 14:17:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:19:20.941 14:17:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.941 14:17:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.941 14:17:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.941 14:17:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:20.941 14:17:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:20.941 14:17:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:21.199 
14:17:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:19:21.199 14:17:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.199 14:17:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:21.199 14:17:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:21.199 14:17:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:21.199 14:17:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.199 14:17:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.199 14:17:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.199 14:17:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.199 14:17:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.199 14:17:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.199 14:17:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.457 00:19:21.457 14:17:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.457 14:17:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.457 14:17:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.715 14:17:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.715 14:17:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.715 14:17:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.715 14:17:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.715 14:17:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.715 14:17:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:21.715 { 00:19:21.715 "cntlid": 53, 00:19:21.715 "qid": 0, 00:19:21.715 "state": "enabled", 00:19:21.715 "listen_address": { 00:19:21.715 "trtype": "RDMA", 00:19:21.715 "adrfam": "IPv4", 00:19:21.715 "traddr": "192.168.100.8", 00:19:21.715 "trsvcid": "4420" 00:19:21.715 }, 00:19:21.715 "peer_address": { 00:19:21.715 "trtype": "RDMA", 00:19:21.715 "adrfam": "IPv4", 00:19:21.715 "traddr": "192.168.100.8", 00:19:21.715 "trsvcid": "50945" 00:19:21.715 }, 00:19:21.715 "auth": { 00:19:21.715 "state": "completed", 00:19:21.715 "digest": "sha384", 00:19:21.715 "dhgroup": "null" 00:19:21.715 } 00:19:21.715 } 00:19:21.715 ]' 00:19:21.715 14:17:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:19:21.715 14:17:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:21.715 14:17:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:21.715 14:17:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:21.715 14:17:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:21.715 14:17:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.715 14:17:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.715 14:17:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.973 14:17:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:02:MTdlMDdmOTVhMWI1NjdmNjA5NzdiMzU4M2FiMDIwY2M0ZWM3ZmNjMDMyZWZkOTgzuoDADg==: --dhchap-ctrl-secret DHHC-1:01:NGVkNzMzN2FmMTc5YzhjNTc0MjEzOGU4MWY3MzQ4YzlGMVrI: 00:19:23.343 14:17:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.343 14:17:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:19:23.343 14:17:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.343 14:17:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.343 14:17:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.343 14:17:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:23.343 14:17:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:23.343 14:17:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:23.343 14:17:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:19:23.343 14:17:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:23.343 14:17:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:23.343 14:17:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:23.343 14:17:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:23.343 14:17:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.343 14:17:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key3 00:19:23.343 14:17:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.343 14:17:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.343 14:17:50 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.343 14:17:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:23.343 14:17:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:23.908 00:19:23.908 14:17:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:23.908 14:17:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.908 14:17:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.908 14:17:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.908 14:17:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.908 14:17:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.908 14:17:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.908 14:17:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.908 14:17:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.908 { 00:19:23.908 "cntlid": 55, 00:19:23.908 "qid": 0, 00:19:23.908 "state": "enabled", 00:19:23.908 "listen_address": { 00:19:23.908 "trtype": "RDMA", 00:19:23.908 "adrfam": "IPv4", 00:19:23.908 "traddr": "192.168.100.8", 00:19:23.908 "trsvcid": "4420" 00:19:23.908 }, 00:19:23.908 "peer_address": { 00:19:23.908 "trtype": "RDMA", 00:19:23.908 "adrfam": "IPv4", 00:19:23.908 "traddr": "192.168.100.8", 00:19:23.908 "trsvcid": "38879" 00:19:23.908 }, 00:19:23.908 "auth": { 00:19:23.908 "state": "completed", 00:19:23.908 "digest": "sha384", 00:19:23.908 "dhgroup": "null" 00:19:23.908 } 00:19:23.908 } 00:19:23.908 ]' 00:19:23.908 14:17:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.166 14:17:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:24.166 14:17:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.166 14:17:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:24.166 14:17:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.166 14:17:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.166 14:17:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.166 14:17:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.424 14:17:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret 
DHHC-1:03:YjJkYTdlMzNiOWZkZjVkYTM3MjlmYmRkZDE1ZDhiMTQ2M2VhYTg3NTlkNjQ0M2FiZmNmZTJiOTVhZWZiYzg4M2IKApo=: 00:19:25.795 14:17:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.795 14:17:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:19:25.795 14:17:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.795 14:17:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.795 14:17:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.795 14:17:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:25.795 14:17:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.795 14:17:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:25.795 14:17:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:26.053 14:17:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:19:26.053 14:17:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.053 14:17:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:26.053 14:17:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:26.053 14:17:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:26.053 14:17:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.053 14:17:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.053 14:17:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.053 14:17:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.053 14:17:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.053 14:17:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.053 14:17:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.310 00:19:26.310 14:17:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.310 14:17:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
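[annotation] The records above show the outer loop advancing from the "null" DH group to "ffdhe2048". Condensed from the xtrace markers (target/auth.sh@92-96), the driver looks roughly like the sketch below; the dhgroups/keys/ckeys arrays are populated earlier in auth.sh (not shown in this excerpt) and only the sha384 digest leg is visible here, so treat this as a reconstruction rather than the verbatim script:

    for dhgroup in "${dhgroups[@]}"; do      # null, ffdhe2048, ffdhe3072, ffdhe4096, ...
        for keyid in "${!keys[@]}"; do       # key0 .. key3
            # reconfigure the host-side bdev layer, then run one full auth cycle
            hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha384 "$dhgroup" "$keyid"
        done
    done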
00:19:26.310 14:17:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.567 14:17:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.567 14:17:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.567 14:17:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.567 14:17:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.567 14:17:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.567 14:17:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.567 { 00:19:26.567 "cntlid": 57, 00:19:26.567 "qid": 0, 00:19:26.567 "state": "enabled", 00:19:26.567 "listen_address": { 00:19:26.567 "trtype": "RDMA", 00:19:26.567 "adrfam": "IPv4", 00:19:26.567 "traddr": "192.168.100.8", 00:19:26.567 "trsvcid": "4420" 00:19:26.567 }, 00:19:26.567 "peer_address": { 00:19:26.567 "trtype": "RDMA", 00:19:26.567 "adrfam": "IPv4", 00:19:26.567 "traddr": "192.168.100.8", 00:19:26.567 "trsvcid": "60432" 00:19:26.567 }, 00:19:26.567 "auth": { 00:19:26.567 "state": "completed", 00:19:26.567 "digest": "sha384", 00:19:26.567 "dhgroup": "ffdhe2048" 00:19:26.567 } 00:19:26.567 } 00:19:26.567 ]' 00:19:26.567 14:17:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.567 14:17:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:26.567 14:17:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.567 14:17:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:26.567 14:17:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.567 14:17:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.567 14:17:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.567 14:17:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.825 14:17:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:00:MTJkNWI3YWM1ZWVmZDljNTg2NTY5OTUwMmVmYjU4ODNjYjY0N2UxYjNjZDM0M2Jj57D/bQ==: --dhchap-ctrl-secret DHHC-1:03:MzE3MjY2ZDlkM2I2YTM2YjMyN2VhNmE1OGYyMTFmNWZhM2I5MTUxN2IyMmM5ZWExNzcwNWJhNWI2MjVhMDcxNP858Ys=: 00:19:28.197 14:17:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.197 14:17:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:19:28.197 14:17:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.197 14:17:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.197 14:17:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.197 14:17:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid 
in "${!keys[@]}" 00:19:28.197 14:17:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:28.197 14:17:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:28.454 14:17:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:19:28.454 14:17:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.454 14:17:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:28.454 14:17:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:28.454 14:17:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:28.454 14:17:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.454 14:17:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.454 14:17:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.454 14:17:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.454 14:17:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.454 14:17:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.454 14:17:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.712 00:19:28.712 14:17:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.712 14:17:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.712 14:17:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.970 14:17:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.970 14:17:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.970 14:17:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.970 14:17:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.970 14:17:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.970 14:17:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.970 { 00:19:28.970 "cntlid": 59, 00:19:28.970 "qid": 0, 00:19:28.970 "state": "enabled", 00:19:28.970 "listen_address": { 00:19:28.970 "trtype": "RDMA", 00:19:28.970 "adrfam": "IPv4", 00:19:28.970 "traddr": "192.168.100.8", 00:19:28.970 "trsvcid": "4420" 00:19:28.970 }, 
00:19:28.970 "peer_address": { 00:19:28.970 "trtype": "RDMA", 00:19:28.970 "adrfam": "IPv4", 00:19:28.970 "traddr": "192.168.100.8", 00:19:28.970 "trsvcid": "33510" 00:19:28.970 }, 00:19:28.970 "auth": { 00:19:28.970 "state": "completed", 00:19:28.970 "digest": "sha384", 00:19:28.970 "dhgroup": "ffdhe2048" 00:19:28.970 } 00:19:28.970 } 00:19:28.970 ]' 00:19:28.970 14:17:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.970 14:17:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:28.970 14:17:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.970 14:17:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:28.970 14:17:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:28.970 14:17:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.970 14:17:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.970 14:17:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.228 14:17:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:01:NDUxZWQ2M2NmNzUyYzc5ZGFmODY3MWVjYjY2MzlmNWVZ/epb: --dhchap-ctrl-secret DHHC-1:02:MmE0M2Y3ZTI4OTQ0YzU3MzE2YmUwMTlhNTgxN2Q2ZDU2MDRlMTNlOWEzMTVlNTg06IkDFg==: 00:19:30.600 14:17:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.600 14:17:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:19:30.600 14:17:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.600 14:17:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.600 14:17:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.600 14:17:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.600 14:17:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:30.600 14:17:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:30.857 14:17:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:19:30.857 14:17:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.857 14:17:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:30.857 14:17:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:30.857 14:17:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:30.857 14:17:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:19:30.857 14:17:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.857 14:17:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.857 14:17:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.857 14:17:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.857 14:17:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.857 14:17:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.114 00:19:31.114 14:17:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.114 14:17:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.114 14:17:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.371 14:17:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.371 14:17:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.371 14:17:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.371 14:17:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.371 14:17:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.371 14:17:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.371 { 00:19:31.371 "cntlid": 61, 00:19:31.371 "qid": 0, 00:19:31.371 "state": "enabled", 00:19:31.371 "listen_address": { 00:19:31.371 "trtype": "RDMA", 00:19:31.371 "adrfam": "IPv4", 00:19:31.371 "traddr": "192.168.100.8", 00:19:31.371 "trsvcid": "4420" 00:19:31.371 }, 00:19:31.371 "peer_address": { 00:19:31.371 "trtype": "RDMA", 00:19:31.371 "adrfam": "IPv4", 00:19:31.371 "traddr": "192.168.100.8", 00:19:31.371 "trsvcid": "43323" 00:19:31.371 }, 00:19:31.371 "auth": { 00:19:31.371 "state": "completed", 00:19:31.371 "digest": "sha384", 00:19:31.371 "dhgroup": "ffdhe2048" 00:19:31.371 } 00:19:31.371 } 00:19:31.371 ]' 00:19:31.371 14:17:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.371 14:17:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:31.371 14:17:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.371 14:17:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:31.371 14:17:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.628 14:17:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.628 14:17:58 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.628 14:17:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.885 14:17:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:02:MTdlMDdmOTVhMWI1NjdmNjA5NzdiMzU4M2FiMDIwY2M0ZWM3ZmNjMDMyZWZkOTgzuoDADg==: --dhchap-ctrl-secret DHHC-1:01:NGVkNzMzN2FmMTc5YzhjNTc0MjEzOGU4MWY3MzQ4YzlGMVrI: 00:19:32.827 14:18:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.145 14:18:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:19:33.145 14:18:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.145 14:18:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.145 14:18:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.145 14:18:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.145 14:18:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:33.145 14:18:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:33.145 14:18:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:19:33.145 14:18:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.145 14:18:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:33.145 14:18:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:33.145 14:18:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:33.145 14:18:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.145 14:18:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key3 00:19:33.145 14:18:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.145 14:18:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.403 14:18:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.403 14:18:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:33.403 14:18:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:33.661 00:19:33.661 14:18:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.661 14:18:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.661 14:18:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.918 14:18:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.918 14:18:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.918 14:18:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.918 14:18:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.918 14:18:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.918 14:18:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.918 { 00:19:33.918 "cntlid": 63, 00:19:33.918 "qid": 0, 00:19:33.918 "state": "enabled", 00:19:33.918 "listen_address": { 00:19:33.918 "trtype": "RDMA", 00:19:33.918 "adrfam": "IPv4", 00:19:33.918 "traddr": "192.168.100.8", 00:19:33.918 "trsvcid": "4420" 00:19:33.918 }, 00:19:33.918 "peer_address": { 00:19:33.918 "trtype": "RDMA", 00:19:33.918 "adrfam": "IPv4", 00:19:33.918 "traddr": "192.168.100.8", 00:19:33.918 "trsvcid": "60447" 00:19:33.918 }, 00:19:33.918 "auth": { 00:19:33.918 "state": "completed", 00:19:33.918 "digest": "sha384", 00:19:33.918 "dhgroup": "ffdhe2048" 00:19:33.918 } 00:19:33.918 } 00:19:33.918 ]' 00:19:33.918 14:18:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.918 14:18:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:33.918 14:18:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.918 14:18:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:33.918 14:18:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.918 14:18:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.918 14:18:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.918 14:18:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.176 14:18:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:03:YjJkYTdlMzNiOWZkZjVkYTM3MjlmYmRkZDE1ZDhiMTQ2M2VhYTg3NTlkNjQ0M2FiZmNmZTJiOTVhZWZiYzg4M2IKApo=: 00:19:35.550 14:18:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.550 14:18:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:19:35.550 14:18:02 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.550 14:18:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.550 14:18:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.550 14:18:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:35.550 14:18:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.550 14:18:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:35.550 14:18:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:35.807 14:18:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:19:35.808 14:18:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.808 14:18:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:35.808 14:18:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:35.808 14:18:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:35.808 14:18:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.808 14:18:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.808 14:18:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.808 14:18:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.808 14:18:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.808 14:18:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.808 14:18:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.065 00:19:36.065 14:18:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.065 14:18:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.065 14:18:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.323 14:18:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.323 14:18:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.323 14:18:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.323 14:18:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
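[annotation] Throughout this log, every "hostrpc <cmd>" marker is immediately followed by its expansion against /var/tmp/host.sock: the test runs a second SPDK application in the NVMe-oF host role and drives it over a dedicated RPC socket, separate from the target's default socket. Reconstructed from the target/auth.sh@31 lines; the $rootdir variable is an assumption standing in for the repo path seen in the trace:

    hostrpc() {
        # forward an RPC to the SPDK instance playing the host role
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }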
00:19:36.323 14:18:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.323 14:18:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.323 { 00:19:36.323 "cntlid": 65, 00:19:36.323 "qid": 0, 00:19:36.323 "state": "enabled", 00:19:36.323 "listen_address": { 00:19:36.323 "trtype": "RDMA", 00:19:36.323 "adrfam": "IPv4", 00:19:36.323 "traddr": "192.168.100.8", 00:19:36.323 "trsvcid": "4420" 00:19:36.323 }, 00:19:36.323 "peer_address": { 00:19:36.323 "trtype": "RDMA", 00:19:36.323 "adrfam": "IPv4", 00:19:36.323 "traddr": "192.168.100.8", 00:19:36.323 "trsvcid": "50959" 00:19:36.323 }, 00:19:36.323 "auth": { 00:19:36.323 "state": "completed", 00:19:36.323 "digest": "sha384", 00:19:36.323 "dhgroup": "ffdhe3072" 00:19:36.323 } 00:19:36.323 } 00:19:36.323 ]' 00:19:36.323 14:18:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.323 14:18:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:36.323 14:18:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.323 14:18:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:36.323 14:18:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.323 14:18:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.323 14:18:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.323 14:18:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.580 14:18:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:00:MTJkNWI3YWM1ZWVmZDljNTg2NTY5OTUwMmVmYjU4ODNjYjY0N2UxYjNjZDM0M2Jj57D/bQ==: --dhchap-ctrl-secret DHHC-1:03:MzE3MjY2ZDlkM2I2YTM2YjMyN2VhNmE1OGYyMTFmNWZhM2I5MTUxN2IyMmM5ZWExNzcwNWJhNWI2MjVhMDcxNP858Ys=: 00:19:37.950 14:18:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.950 14:18:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:19:37.950 14:18:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.950 14:18:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.950 14:18:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.950 14:18:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.951 14:18:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:37.951 14:18:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:38.208 14:18:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 
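[annotation] The JSON blocks in this log are what connect_authenticate inspects after each attach. Per the @44-@49 markers, the check amounts to the sketch below: confirm the controller came up under the expected name, then confirm the target-side qpair negotiated the expected digest and DH group and reached the "completed" auth state before detaching. The <<< herestring form is an assumption about how the captured $qpairs value is fed to jq; $digest and $dhgroup are the function's locals visible in the trace:

    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]   # e.g. sha384
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]   # e.g. ffdhe3072
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]
    hostrpc bdev_nvme_detach_controller nvme0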
00:19:38.208 14:18:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.208 14:18:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:38.208 14:18:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:38.208 14:18:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:38.208 14:18:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.208 14:18:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.208 14:18:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.208 14:18:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.208 14:18:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.208 14:18:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.208 14:18:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.466 00:19:38.466 14:18:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.466 14:18:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.466 14:18:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.723 14:18:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.723 14:18:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.723 14:18:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.723 14:18:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.723 14:18:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.723 14:18:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.723 { 00:19:38.723 "cntlid": 67, 00:19:38.723 "qid": 0, 00:19:38.723 "state": "enabled", 00:19:38.723 "listen_address": { 00:19:38.723 "trtype": "RDMA", 00:19:38.723 "adrfam": "IPv4", 00:19:38.723 "traddr": "192.168.100.8", 00:19:38.723 "trsvcid": "4420" 00:19:38.723 }, 00:19:38.723 "peer_address": { 00:19:38.723 "trtype": "RDMA", 00:19:38.723 "adrfam": "IPv4", 00:19:38.723 "traddr": "192.168.100.8", 00:19:38.723 "trsvcid": "58007" 00:19:38.723 }, 00:19:38.723 "auth": { 00:19:38.723 "state": "completed", 00:19:38.723 "digest": "sha384", 00:19:38.723 "dhgroup": "ffdhe3072" 00:19:38.723 } 00:19:38.723 } 00:19:38.723 ]' 00:19:38.723 14:18:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.723 14:18:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha384 == \s\h\a\3\8\4 ]] 00:19:38.723 14:18:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.981 14:18:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:38.981 14:18:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.981 14:18:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.981 14:18:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.981 14:18:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.239 14:18:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:01:NDUxZWQ2M2NmNzUyYzc5ZGFmODY3MWVjYjY2MzlmNWVZ/epb: --dhchap-ctrl-secret DHHC-1:02:MmE0M2Y3ZTI4OTQ0YzU3MzE2YmUwMTlhNTgxN2Q2ZDU2MDRlMTNlOWEzMTVlNTg06IkDFg==: 00:19:40.172 14:18:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.429 14:18:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:19:40.429 14:18:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.429 14:18:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.429 14:18:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.429 14:18:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.429 14:18:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:40.429 14:18:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:40.686 14:18:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:19:40.686 14:18:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.686 14:18:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:40.686 14:18:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:40.686 14:18:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:40.686 14:18:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.686 14:18:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.686 14:18:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.686 14:18:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.686 14:18:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
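[annotation] After the SPDK-host check, each cycle repeats the handshake from the Linux kernel initiator and then cleans up, as in the @52-@56 records above. In outline (secrets truncated here; note that the two-digit field in the DHHC-1:NN: prefix encodes the key's transformation hash, 00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512, which matches the key0-key3 secrets seen throughout this log):

    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 \
        --hostid 6b85a288-a0c4-e211-af09-001e678e7911 \
        --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911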
00:19:40.686 14:18:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.686 14:18:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.943 00:19:40.943 14:18:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.943 14:18:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.943 14:18:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.201 14:18:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.201 14:18:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.201 14:18:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.201 14:18:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.201 14:18:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.201 14:18:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.201 { 00:19:41.201 "cntlid": 69, 00:19:41.201 "qid": 0, 00:19:41.201 "state": "enabled", 00:19:41.201 "listen_address": { 00:19:41.201 "trtype": "RDMA", 00:19:41.201 "adrfam": "IPv4", 00:19:41.201 "traddr": "192.168.100.8", 00:19:41.201 "trsvcid": "4420" 00:19:41.201 }, 00:19:41.201 "peer_address": { 00:19:41.201 "trtype": "RDMA", 00:19:41.201 "adrfam": "IPv4", 00:19:41.201 "traddr": "192.168.100.8", 00:19:41.201 "trsvcid": "43279" 00:19:41.201 }, 00:19:41.201 "auth": { 00:19:41.201 "state": "completed", 00:19:41.201 "digest": "sha384", 00:19:41.201 "dhgroup": "ffdhe3072" 00:19:41.201 } 00:19:41.201 } 00:19:41.201 ]' 00:19:41.201 14:18:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.201 14:18:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:41.201 14:18:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.201 14:18:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:41.201 14:18:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.458 14:18:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.458 14:18:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.458 14:18:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.716 14:18:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 
--dhchap-secret DHHC-1:02:MTdlMDdmOTVhMWI1NjdmNjA5NzdiMzU4M2FiMDIwY2M0ZWM3ZmNjMDMyZWZkOTgzuoDADg==: --dhchap-ctrl-secret DHHC-1:01:NGVkNzMzN2FmMTc5YzhjNTc0MjEzOGU4MWY3MzQ4YzlGMVrI: 00:19:42.648 14:18:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.905 14:18:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:19:42.905 14:18:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.905 14:18:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.905 14:18:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.905 14:18:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.905 14:18:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:42.905 14:18:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:43.163 14:18:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:19:43.163 14:18:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.163 14:18:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:43.163 14:18:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:43.163 14:18:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:43.163 14:18:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.163 14:18:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key3 00:19:43.163 14:18:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.163 14:18:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.163 14:18:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.163 14:18:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:43.163 14:18:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:43.421 00:19:43.421 14:18:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:43.421 14:18:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.421 14:18:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.678 
14:18:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.679 14:18:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.679 14:18:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.679 14:18:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.679 14:18:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.679 14:18:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.679 { 00:19:43.679 "cntlid": 71, 00:19:43.679 "qid": 0, 00:19:43.679 "state": "enabled", 00:19:43.679 "listen_address": { 00:19:43.679 "trtype": "RDMA", 00:19:43.679 "adrfam": "IPv4", 00:19:43.679 "traddr": "192.168.100.8", 00:19:43.679 "trsvcid": "4420" 00:19:43.679 }, 00:19:43.679 "peer_address": { 00:19:43.679 "trtype": "RDMA", 00:19:43.679 "adrfam": "IPv4", 00:19:43.679 "traddr": "192.168.100.8", 00:19:43.679 "trsvcid": "56025" 00:19:43.679 }, 00:19:43.679 "auth": { 00:19:43.679 "state": "completed", 00:19:43.679 "digest": "sha384", 00:19:43.679 "dhgroup": "ffdhe3072" 00:19:43.679 } 00:19:43.679 } 00:19:43.679 ]' 00:19:43.679 14:18:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.679 14:18:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:43.679 14:18:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.679 14:18:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:43.679 14:18:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.679 14:18:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.679 14:18:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.679 14:18:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.936 14:18:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:03:YjJkYTdlMzNiOWZkZjVkYTM3MjlmYmRkZDE1ZDhiMTQ2M2VhYTg3NTlkNjQ0M2FiZmNmZTJiOTVhZWZiYzg4M2IKApo=: 00:19:45.308 14:18:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.308 14:18:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:19:45.308 14:18:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.308 14:18:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.308 14:18:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.308 14:18:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:45.308 14:18:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:45.308 14:18:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:45.308 14:18:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:45.566 14:18:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:19:45.566 14:18:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:45.566 14:18:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:45.566 14:18:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:45.566 14:18:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:45.566 14:18:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.566 14:18:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.566 14:18:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.566 14:18:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.566 14:18:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.566 14:18:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.566 14:18:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.824 00:19:45.824 14:18:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.824 14:18:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.824 14:18:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.090 14:18:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.090 14:18:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.090 14:18:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.090 14:18:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.090 14:18:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.090 14:18:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.090 { 00:19:46.090 "cntlid": 73, 00:19:46.090 "qid": 0, 00:19:46.090 "state": "enabled", 00:19:46.090 "listen_address": { 00:19:46.090 "trtype": "RDMA", 00:19:46.090 "adrfam": "IPv4", 00:19:46.090 "traddr": "192.168.100.8", 00:19:46.090 "trsvcid": "4420" 00:19:46.090 }, 00:19:46.090 "peer_address": { 00:19:46.090 "trtype": "RDMA", 00:19:46.090 "adrfam": "IPv4", 00:19:46.090 
"traddr": "192.168.100.8", 00:19:46.090 "trsvcid": "45178" 00:19:46.090 }, 00:19:46.090 "auth": { 00:19:46.090 "state": "completed", 00:19:46.090 "digest": "sha384", 00:19:46.090 "dhgroup": "ffdhe4096" 00:19:46.090 } 00:19:46.090 } 00:19:46.090 ]' 00:19:46.090 14:18:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.090 14:18:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:46.090 14:18:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.090 14:18:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:46.090 14:18:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.348 14:18:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.348 14:18:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.348 14:18:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.606 14:18:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:00:MTJkNWI3YWM1ZWVmZDljNTg2NTY5OTUwMmVmYjU4ODNjYjY0N2UxYjNjZDM0M2Jj57D/bQ==: --dhchap-ctrl-secret DHHC-1:03:MzE3MjY2ZDlkM2I2YTM2YjMyN2VhNmE1OGYyMTFmNWZhM2I5MTUxN2IyMmM5ZWExNzcwNWJhNWI2MjVhMDcxNP858Ys=: 00:19:47.538 14:18:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.795 14:18:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:19:47.795 14:18:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.795 14:18:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.795 14:18:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.795 14:18:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.795 14:18:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:47.795 14:18:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:48.059 14:18:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:19:48.059 14:18:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.059 14:18:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:48.059 14:18:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:48.059 14:18:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:48.059 14:18:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.059 14:18:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.059 14:18:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.059 14:18:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.059 14:18:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.059 14:18:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.059 14:18:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.366 00:19:48.366 14:18:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.366 14:18:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:48.366 14:18:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.622 14:18:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.622 14:18:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.622 14:18:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.622 14:18:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.622 14:18:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.622 14:18:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:48.622 { 00:19:48.622 "cntlid": 75, 00:19:48.622 "qid": 0, 00:19:48.622 "state": "enabled", 00:19:48.622 "listen_address": { 00:19:48.622 "trtype": "RDMA", 00:19:48.622 "adrfam": "IPv4", 00:19:48.622 "traddr": "192.168.100.8", 00:19:48.622 "trsvcid": "4420" 00:19:48.622 }, 00:19:48.622 "peer_address": { 00:19:48.622 "trtype": "RDMA", 00:19:48.622 "adrfam": "IPv4", 00:19:48.622 "traddr": "192.168.100.8", 00:19:48.623 "trsvcid": "47544" 00:19:48.623 }, 00:19:48.623 "auth": { 00:19:48.623 "state": "completed", 00:19:48.623 "digest": "sha384", 00:19:48.623 "dhgroup": "ffdhe4096" 00:19:48.623 } 00:19:48.623 } 00:19:48.623 ]' 00:19:48.623 14:18:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:48.623 14:18:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:48.623 14:18:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:48.623 14:18:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:48.623 14:18:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:48.879 14:18:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.879 14:18:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
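The records above and below repeat one DH-HMAC-CHAP verification cycle per (digest, dhgroup, keyid) combination: the host-side bdev layer is restricted to a single digest/dhgroup pair, the target is told which key (and, for bidirectional auth, which controller key) to accept for the host NQN, a controller is attached so the handshake runs, the negotiated parameters are checked, and everything is torn down and re-verified with the kernel initiator via nvme connect/disconnect. A minimal shell sketch of the traced sequence, assuming rpc_cmd-style target calls go to the default SPDK RPC socket and using key1/ckey1 as the example pair (the RPC/HOST_SOCK/SUBNQN/HOSTNQN variable names are illustrative, not taken from the script):

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    HOST_SOCK=/var/tmp/host.sock
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911

    # Host side: allow exactly one digest/dhgroup pair for DH-HMAC-CHAP.
    "$RPC" -s "$HOST_SOCK" bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096

    # Target side: register the host with its key material.
    "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Host side: attach a controller, which triggers the authentication
    # handshake against the RDMA listener on 192.168.100.8:4420.
    "$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -b nvme0 -t rdma \
        -f ipv4 -a 192.168.100.8 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Inspect the resulting qpair, then tear down before the next combination.
    "$RPC" nvmf_subsystem_get_qpairs "$SUBNQN"
    "$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0
    "$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
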
00:19:48.879 14:18:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.136 14:18:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:01:NDUxZWQ2M2NmNzUyYzc5ZGFmODY3MWVjYjY2MzlmNWVZ/epb: --dhchap-ctrl-secret DHHC-1:02:MmE0M2Y3ZTI4OTQ0YzU3MzE2YmUwMTlhNTgxN2Q2ZDU2MDRlMTNlOWEzMTVlNTg06IkDFg==: 00:19:50.072 14:18:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.072 14:18:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:19:50.072 14:18:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.072 14:18:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.072 14:18:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.072 14:18:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:50.072 14:18:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:50.072 14:18:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:50.637 14:18:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:19:50.637 14:18:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.637 14:18:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:50.637 14:18:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:50.637 14:18:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:50.637 14:18:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.637 14:18:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.637 14:18:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.637 14:18:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.637 14:18:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.637 14:18:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.637 14:18:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.894 00:19:50.894 14:18:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:50.894 14:18:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:50.894 14:18:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.151 14:18:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.151 14:18:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.151 14:18:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.151 14:18:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.151 14:18:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.151 14:18:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:51.151 { 00:19:51.151 "cntlid": 77, 00:19:51.151 "qid": 0, 00:19:51.151 "state": "enabled", 00:19:51.151 "listen_address": { 00:19:51.151 "trtype": "RDMA", 00:19:51.151 "adrfam": "IPv4", 00:19:51.151 "traddr": "192.168.100.8", 00:19:51.151 "trsvcid": "4420" 00:19:51.151 }, 00:19:51.151 "peer_address": { 00:19:51.151 "trtype": "RDMA", 00:19:51.151 "adrfam": "IPv4", 00:19:51.151 "traddr": "192.168.100.8", 00:19:51.151 "trsvcid": "48057" 00:19:51.151 }, 00:19:51.151 "auth": { 00:19:51.151 "state": "completed", 00:19:51.151 "digest": "sha384", 00:19:51.151 "dhgroup": "ffdhe4096" 00:19:51.151 } 00:19:51.151 } 00:19:51.151 ]' 00:19:51.151 14:18:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:51.151 14:18:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:51.151 14:18:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:51.407 14:18:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:51.407 14:18:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:51.407 14:18:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.407 14:18:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.407 14:18:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.664 14:18:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:02:MTdlMDdmOTVhMWI1NjdmNjA5NzdiMzU4M2FiMDIwY2M0ZWM3ZmNjMDMyZWZkOTgzuoDADg==: --dhchap-ctrl-secret DHHC-1:01:NGVkNzMzN2FmMTc5YzhjNTc0MjEzOGU4MWY3MzQ4YzlGMVrI: 00:19:52.595 14:18:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.852 14:18:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:19:52.852 14:18:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.852 14:18:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.852 14:18:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.852 14:18:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:52.852 14:18:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:52.852 14:18:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:53.109 14:18:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:19:53.109 14:18:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:53.109 14:18:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:53.109 14:18:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:53.109 14:18:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:53.109 14:18:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.109 14:18:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key3 00:19:53.109 14:18:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.109 14:18:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.109 14:18:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.109 14:18:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:53.109 14:18:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:53.366 00:19:53.366 14:18:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:53.366 14:18:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:53.366 14:18:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.623 14:18:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.623 14:18:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.623 14:18:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.623 14:18:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.623 14:18:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
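For every combination the assertions are the same, and the jq filters appear verbatim in the trace: the test fetches the single active qpair from the target and requires that the negotiated digest, dhgroup, and auth state match what it configured. A minimal sketch of those checks, assuming the qpairs JSON is shaped as in the qpair dumps throughout this trace:

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
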
00:19:53.623 14:18:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:53.623 { 00:19:53.623 "cntlid": 79, 00:19:53.623 "qid": 0, 00:19:53.623 "state": "enabled", 00:19:53.623 "listen_address": { 00:19:53.623 "trtype": "RDMA", 00:19:53.623 "adrfam": "IPv4", 00:19:53.623 "traddr": "192.168.100.8", 00:19:53.623 "trsvcid": "4420" 00:19:53.623 }, 00:19:53.623 "peer_address": { 00:19:53.623 "trtype": "RDMA", 00:19:53.623 "adrfam": "IPv4", 00:19:53.623 "traddr": "192.168.100.8", 00:19:53.623 "trsvcid": "58762" 00:19:53.623 }, 00:19:53.623 "auth": { 00:19:53.623 "state": "completed", 00:19:53.623 "digest": "sha384", 00:19:53.623 "dhgroup": "ffdhe4096" 00:19:53.623 } 00:19:53.623 } 00:19:53.623 ]' 00:19:53.624 14:18:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:53.624 14:18:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:53.624 14:18:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:53.624 14:18:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:53.624 14:18:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:53.881 14:18:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.881 14:18:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.881 14:18:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.138 14:18:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:03:YjJkYTdlMzNiOWZkZjVkYTM3MjlmYmRkZDE1ZDhiMTQ2M2VhYTg3NTlkNjQ0M2FiZmNmZTJiOTVhZWZiYzg4M2IKApo=: 00:19:55.069 14:18:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.069 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.069 14:18:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:19:55.069 14:18:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.069 14:18:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.069 14:18:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.069 14:18:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:55.069 14:18:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:55.069 14:18:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:55.069 14:18:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:55.327 14:18:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:19:55.327 14:18:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key 
ckey qpairs 00:19:55.327 14:18:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:55.327 14:18:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:55.327 14:18:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:55.327 14:18:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.327 14:18:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.327 14:18:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.327 14:18:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.327 14:18:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.327 14:18:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.327 14:18:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.892 00:19:55.892 14:18:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:55.892 14:18:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:55.892 14:18:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.150 14:18:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.150 14:18:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.150 14:18:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.150 14:18:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.150 14:18:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.150 14:18:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:56.150 { 00:19:56.150 "cntlid": 81, 00:19:56.150 "qid": 0, 00:19:56.150 "state": "enabled", 00:19:56.150 "listen_address": { 00:19:56.150 "trtype": "RDMA", 00:19:56.150 "adrfam": "IPv4", 00:19:56.150 "traddr": "192.168.100.8", 00:19:56.150 "trsvcid": "4420" 00:19:56.150 }, 00:19:56.150 "peer_address": { 00:19:56.150 "trtype": "RDMA", 00:19:56.150 "adrfam": "IPv4", 00:19:56.150 "traddr": "192.168.100.8", 00:19:56.150 "trsvcid": "35234" 00:19:56.150 }, 00:19:56.150 "auth": { 00:19:56.150 "state": "completed", 00:19:56.150 "digest": "sha384", 00:19:56.150 "dhgroup": "ffdhe6144" 00:19:56.150 } 00:19:56.150 } 00:19:56.150 ]' 00:19:56.150 14:18:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.408 14:18:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:56.408 14:18:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:19:56.408 14:18:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:56.408 14:18:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:56.408 14:18:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.408 14:18:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.408 14:18:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.665 14:18:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:00:MTJkNWI3YWM1ZWVmZDljNTg2NTY5OTUwMmVmYjU4ODNjYjY0N2UxYjNjZDM0M2Jj57D/bQ==: --dhchap-ctrl-secret DHHC-1:03:MzE3MjY2ZDlkM2I2YTM2YjMyN2VhNmE1OGYyMTFmNWZhM2I5MTUxN2IyMmM5ZWExNzcwNWJhNWI2MjVhMDcxNP858Ys=: 00:19:57.598 14:18:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.855 14:18:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:19:57.855 14:18:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.855 14:18:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.855 14:18:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.855 14:18:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:57.855 14:18:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:57.855 14:18:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:58.113 14:18:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:19:58.113 14:18:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:58.113 14:18:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:58.113 14:18:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:58.113 14:18:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:58.113 14:18:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.113 14:18:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.113 14:18:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.113 14:18:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.113 14:18:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.113 14:18:25 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.113 14:18:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.678 00:19:58.678 14:18:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.678 14:18:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.678 14:18:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.936 14:18:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.936 14:18:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.936 14:18:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.936 14:18:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.936 14:18:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.936 14:18:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:58.936 { 00:19:58.936 "cntlid": 83, 00:19:58.936 "qid": 0, 00:19:58.936 "state": "enabled", 00:19:58.936 "listen_address": { 00:19:58.936 "trtype": "RDMA", 00:19:58.936 "adrfam": "IPv4", 00:19:58.936 "traddr": "192.168.100.8", 00:19:58.936 "trsvcid": "4420" 00:19:58.936 }, 00:19:58.936 "peer_address": { 00:19:58.936 "trtype": "RDMA", 00:19:58.936 "adrfam": "IPv4", 00:19:58.936 "traddr": "192.168.100.8", 00:19:58.936 "trsvcid": "59147" 00:19:58.936 }, 00:19:58.936 "auth": { 00:19:58.936 "state": "completed", 00:19:58.936 "digest": "sha384", 00:19:58.936 "dhgroup": "ffdhe6144" 00:19:58.936 } 00:19:58.936 } 00:19:58.936 ]' 00:19:58.936 14:18:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.936 14:18:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:58.936 14:18:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:58.936 14:18:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:58.936 14:18:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:58.936 14:18:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.936 14:18:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.936 14:18:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.194 14:18:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret 
DHHC-1:01:NDUxZWQ2M2NmNzUyYzc5ZGFmODY3MWVjYjY2MzlmNWVZ/epb: --dhchap-ctrl-secret DHHC-1:02:MmE0M2Y3ZTI4OTQ0YzU3MzE2YmUwMTlhNTgxN2Q2ZDU2MDRlMTNlOWEzMTVlNTg06IkDFg==: 00:20:00.566 14:18:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.566 14:18:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:20:00.566 14:18:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.566 14:18:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.566 14:18:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.566 14:18:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:00.566 14:18:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:00.566 14:18:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:00.824 14:18:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:20:00.824 14:18:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:00.824 14:18:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:00.824 14:18:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:00.824 14:18:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:00.824 14:18:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.824 14:18:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.824 14:18:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.824 14:18:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.824 14:18:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.824 14:18:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.824 14:18:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.389 00:20:01.389 14:18:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:01.389 14:18:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:01.389 14:18:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.647 14:18:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.647 14:18:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.647 14:18:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.647 14:18:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.647 14:18:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.647 14:18:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:01.647 { 00:20:01.647 "cntlid": 85, 00:20:01.647 "qid": 0, 00:20:01.647 "state": "enabled", 00:20:01.647 "listen_address": { 00:20:01.647 "trtype": "RDMA", 00:20:01.647 "adrfam": "IPv4", 00:20:01.647 "traddr": "192.168.100.8", 00:20:01.647 "trsvcid": "4420" 00:20:01.647 }, 00:20:01.647 "peer_address": { 00:20:01.647 "trtype": "RDMA", 00:20:01.647 "adrfam": "IPv4", 00:20:01.647 "traddr": "192.168.100.8", 00:20:01.647 "trsvcid": "38736" 00:20:01.647 }, 00:20:01.647 "auth": { 00:20:01.647 "state": "completed", 00:20:01.647 "digest": "sha384", 00:20:01.647 "dhgroup": "ffdhe6144" 00:20:01.647 } 00:20:01.647 } 00:20:01.647 ]' 00:20:01.647 14:18:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:01.647 14:18:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:01.647 14:18:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:01.647 14:18:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:01.647 14:18:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:01.647 14:18:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.647 14:18:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.647 14:18:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.905 14:18:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:02:MTdlMDdmOTVhMWI1NjdmNjA5NzdiMzU4M2FiMDIwY2M0ZWM3ZmNjMDMyZWZkOTgzuoDADg==: --dhchap-ctrl-secret DHHC-1:01:NGVkNzMzN2FmMTc5YzhjNTc0MjEzOGU4MWY3MzQ4YzlGMVrI: 00:20:03.277 14:18:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.277 14:18:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:20:03.277 14:18:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.277 14:18:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.277 14:18:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.277 14:18:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:03.277 14:18:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:03.277 14:18:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:03.565 14:18:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:20:03.565 14:18:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:03.565 14:18:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:03.565 14:18:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:03.565 14:18:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:03.565 14:18:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.565 14:18:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key3 00:20:03.565 14:18:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.565 14:18:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.565 14:18:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.565 14:18:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:03.565 14:18:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:04.130 00:20:04.130 14:18:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:04.130 14:18:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:04.130 14:18:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.388 14:18:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.388 14:18:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.388 14:18:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.388 14:18:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.388 14:18:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.388 14:18:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:04.388 { 00:20:04.388 "cntlid": 87, 00:20:04.388 "qid": 0, 00:20:04.388 "state": "enabled", 00:20:04.388 "listen_address": { 00:20:04.388 "trtype": "RDMA", 00:20:04.388 "adrfam": "IPv4", 00:20:04.388 "traddr": "192.168.100.8", 00:20:04.388 "trsvcid": "4420" 00:20:04.388 }, 00:20:04.388 "peer_address": { 00:20:04.388 "trtype": "RDMA", 00:20:04.388 "adrfam": "IPv4", 00:20:04.388 "traddr": "192.168.100.8", 00:20:04.388 "trsvcid": "38685" 
00:20:04.388 }, 00:20:04.388 "auth": { 00:20:04.388 "state": "completed", 00:20:04.388 "digest": "sha384", 00:20:04.388 "dhgroup": "ffdhe6144" 00:20:04.388 } 00:20:04.388 } 00:20:04.388 ]' 00:20:04.388 14:18:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:04.388 14:18:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:04.388 14:18:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:04.388 14:18:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:04.388 14:18:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:04.388 14:18:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.388 14:18:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.388 14:18:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.646 14:18:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:03:YjJkYTdlMzNiOWZkZjVkYTM3MjlmYmRkZDE1ZDhiMTQ2M2VhYTg3NTlkNjQ0M2FiZmNmZTJiOTVhZWZiYzg4M2IKApo=: 00:20:06.018 14:18:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.018 14:18:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:20:06.018 14:18:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.018 14:18:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.018 14:18:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.018 14:18:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:06.018 14:18:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:06.018 14:18:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:06.018 14:18:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:06.275 14:18:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:20:06.275 14:18:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:06.275 14:18:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:06.275 14:18:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:06.275 14:18:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:06.275 14:18:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.275 14:18:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.275 14:18:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.275 14:18:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.275 14:18:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.275 14:18:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.275 14:18:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.207 00:20:07.207 14:18:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:07.207 14:18:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:07.207 14:18:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.207 14:18:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.207 14:18:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.207 14:18:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.207 14:18:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.207 14:18:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.207 14:18:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:07.207 { 00:20:07.207 "cntlid": 89, 00:20:07.207 "qid": 0, 00:20:07.207 "state": "enabled", 00:20:07.207 "listen_address": { 00:20:07.207 "trtype": "RDMA", 00:20:07.207 "adrfam": "IPv4", 00:20:07.207 "traddr": "192.168.100.8", 00:20:07.207 "trsvcid": "4420" 00:20:07.207 }, 00:20:07.207 "peer_address": { 00:20:07.207 "trtype": "RDMA", 00:20:07.207 "adrfam": "IPv4", 00:20:07.207 "traddr": "192.168.100.8", 00:20:07.207 "trsvcid": "49313" 00:20:07.207 }, 00:20:07.207 "auth": { 00:20:07.207 "state": "completed", 00:20:07.207 "digest": "sha384", 00:20:07.207 "dhgroup": "ffdhe8192" 00:20:07.207 } 00:20:07.207 } 00:20:07.207 ]' 00:20:07.464 14:18:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:07.464 14:18:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:07.464 14:18:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:07.464 14:18:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:07.464 14:18:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:07.464 14:18:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.464 14:18:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.464 14:18:34 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.721 14:18:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:00:MTJkNWI3YWM1ZWVmZDljNTg2NTY5OTUwMmVmYjU4ODNjYjY0N2UxYjNjZDM0M2Jj57D/bQ==: --dhchap-ctrl-secret DHHC-1:03:MzE3MjY2ZDlkM2I2YTM2YjMyN2VhNmE1OGYyMTFmNWZhM2I5MTUxN2IyMmM5ZWExNzcwNWJhNWI2MjVhMDcxNP858Ys=: 00:20:08.654 14:18:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.912 14:18:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:20:08.912 14:18:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.912 14:18:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.912 14:18:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.912 14:18:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:08.912 14:18:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:08.912 14:18:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:09.170 14:18:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:20:09.170 14:18:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:09.170 14:18:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:09.170 14:18:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:09.170 14:18:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:09.170 14:18:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.170 14:18:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.170 14:18:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.170 14:18:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.170 14:18:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.170 14:18:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.170 14:18:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.102 00:20:10.102 14:18:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:10.102 14:18:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:10.102 14:18:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.360 14:18:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.360 14:18:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.360 14:18:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.360 14:18:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.360 14:18:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.360 14:18:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:10.360 { 00:20:10.360 "cntlid": 91, 00:20:10.360 "qid": 0, 00:20:10.360 "state": "enabled", 00:20:10.360 "listen_address": { 00:20:10.360 "trtype": "RDMA", 00:20:10.360 "adrfam": "IPv4", 00:20:10.360 "traddr": "192.168.100.8", 00:20:10.360 "trsvcid": "4420" 00:20:10.360 }, 00:20:10.360 "peer_address": { 00:20:10.360 "trtype": "RDMA", 00:20:10.360 "adrfam": "IPv4", 00:20:10.360 "traddr": "192.168.100.8", 00:20:10.360 "trsvcid": "49066" 00:20:10.360 }, 00:20:10.360 "auth": { 00:20:10.360 "state": "completed", 00:20:10.360 "digest": "sha384", 00:20:10.360 "dhgroup": "ffdhe8192" 00:20:10.360 } 00:20:10.360 } 00:20:10.360 ]' 00:20:10.360 14:18:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:10.360 14:18:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:10.360 14:18:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:10.360 14:18:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:10.360 14:18:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:10.360 14:18:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.360 14:18:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.360 14:18:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.618 14:18:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:01:NDUxZWQ2M2NmNzUyYzc5ZGFmODY3MWVjYjY2MzlmNWVZ/epb: --dhchap-ctrl-secret DHHC-1:02:MmE0M2Y3ZTI4OTQ0YzU3MzE2YmUwMTlhNTgxN2Q2ZDU2MDRlMTNlOWEzMTVlNTg06IkDFg==: 00:20:11.991 14:18:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.991 14:18:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:20:11.991 14:18:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.991 14:18:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.991 14:18:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.991 14:18:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:11.991 14:18:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:11.991 14:18:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:11.991 14:18:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:20:11.991 14:18:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:11.991 14:18:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:11.991 14:18:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:11.991 14:18:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:11.991 14:18:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.991 14:18:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.991 14:18:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.991 14:18:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.249 14:18:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.249 14:18:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.249 14:18:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.182 00:20:13.182 14:18:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:13.182 14:18:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:13.182 14:18:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.182 14:18:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.182 14:18:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.182 14:18:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.182 14:18:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.182 14:18:40 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.182 14:18:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:13.182 { 00:20:13.182 "cntlid": 93, 00:20:13.182 "qid": 0, 00:20:13.182 "state": "enabled", 00:20:13.182 "listen_address": { 00:20:13.182 "trtype": "RDMA", 00:20:13.182 "adrfam": "IPv4", 00:20:13.182 "traddr": "192.168.100.8", 00:20:13.182 "trsvcid": "4420" 00:20:13.182 }, 00:20:13.182 "peer_address": { 00:20:13.182 "trtype": "RDMA", 00:20:13.182 "adrfam": "IPv4", 00:20:13.182 "traddr": "192.168.100.8", 00:20:13.182 "trsvcid": "49007" 00:20:13.182 }, 00:20:13.182 "auth": { 00:20:13.182 "state": "completed", 00:20:13.182 "digest": "sha384", 00:20:13.182 "dhgroup": "ffdhe8192" 00:20:13.182 } 00:20:13.182 } 00:20:13.182 ]' 00:20:13.182 14:18:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:13.440 14:18:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:13.440 14:18:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:13.440 14:18:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:13.440 14:18:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:13.440 14:18:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.440 14:18:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.440 14:18:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.698 14:18:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:02:MTdlMDdmOTVhMWI1NjdmNjA5NzdiMzU4M2FiMDIwY2M0ZWM3ZmNjMDMyZWZkOTgzuoDADg==: --dhchap-ctrl-secret DHHC-1:01:NGVkNzMzN2FmMTc5YzhjNTc0MjEzOGU4MWY3MzQ4YzlGMVrI: 00:20:14.630 14:18:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.887 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.887 14:18:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:20:14.887 14:18:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.887 14:18:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.887 14:18:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.887 14:18:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:14.887 14:18:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:14.887 14:18:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:15.145 14:18:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:20:15.145 14:18:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 
-- # local digest dhgroup key ckey qpairs 00:20:15.145 14:18:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:15.145 14:18:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:15.145 14:18:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:15.145 14:18:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.145 14:18:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key3 00:20:15.145 14:18:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.145 14:18:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.145 14:18:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.145 14:18:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.145 14:18:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:16.076 00:20:16.077 14:18:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:16.077 14:18:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.077 14:18:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.333 14:18:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.333 14:18:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.333 14:18:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.333 14:18:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.333 14:18:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.334 14:18:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:16.334 { 00:20:16.334 "cntlid": 95, 00:20:16.334 "qid": 0, 00:20:16.334 "state": "enabled", 00:20:16.334 "listen_address": { 00:20:16.334 "trtype": "RDMA", 00:20:16.334 "adrfam": "IPv4", 00:20:16.334 "traddr": "192.168.100.8", 00:20:16.334 "trsvcid": "4420" 00:20:16.334 }, 00:20:16.334 "peer_address": { 00:20:16.334 "trtype": "RDMA", 00:20:16.334 "adrfam": "IPv4", 00:20:16.334 "traddr": "192.168.100.8", 00:20:16.334 "trsvcid": "51472" 00:20:16.334 }, 00:20:16.334 "auth": { 00:20:16.334 "state": "completed", 00:20:16.334 "digest": "sha384", 00:20:16.334 "dhgroup": "ffdhe8192" 00:20:16.334 } 00:20:16.334 } 00:20:16.334 ]' 00:20:16.334 14:18:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:16.334 14:18:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:16.334 14:18:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:16.334 
14:18:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:16.334 14:18:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.334 14:18:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.334 14:18:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.334 14:18:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.591 14:18:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:03:YjJkYTdlMzNiOWZkZjVkYTM3MjlmYmRkZDE1ZDhiMTQ2M2VhYTg3NTlkNjQ0M2FiZmNmZTJiOTVhZWZiYzg4M2IKApo=: 00:20:17.523 14:18:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.781 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.781 14:18:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:20:17.781 14:18:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.781 14:18:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.781 14:18:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.781 14:18:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:17.781 14:18:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:17.781 14:18:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:17.781 14:18:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:17.781 14:18:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:18.039 14:18:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:20:18.039 14:18:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:18.039 14:18:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:18.039 14:18:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:18.039 14:18:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:18.039 14:18:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.039 14:18:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.039 14:18:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.039 14:18:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.039 14:18:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
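The trace repeats one verification cycle per digest/dhgroup/key combination. Below is a minimal sketch of that cycle, reconstructed only from the commands visible in this trace and written out for the sha512/null/key0 case that follows; rpc_cmd is the test framework's wrapper for the target-side RPC socket, the hostrpc calls go through rpc.py -s /var/tmp/host.sock, and $key0/$ckey0 stand for the DHHC-1:... secrets printed in the log — this is a reading aid for the trace, not the verbatim contents of target/auth.sh.

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911

    # 1. Restrict the SPDK host stack to the digest/dhgroup pair under test.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups null

    # 2. Register the host on the target with the key under test; the
    #    controller key (ckeyN) is passed only for bidirectional cases,
    #    which is why the key3 iterations above omit --dhchap-ctrlr-key.
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # 3. Attach a controller through the SPDK host stack, forcing a
    #    DH-HMAC-CHAP handshake over RDMA.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # 4. Assert the negotiated auth parameters as the target reports them.
    [[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
    [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]

    # 5. Detach, repeat the same handshake with the kernel initiator, then
    #    remove the host entry before the next iteration.
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 6b85a288-a0c4-e211-af09-001e678e7911 \
        --dhchap-secret "$key0" --dhchap-ctrl-secret "$ckey0"
    nvme disconnect -n "$subnqn"
    rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"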
00:20:18.039 14:18:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.039 14:18:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.297 00:20:18.297 14:18:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:18.297 14:18:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.297 14:18:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:18.554 14:18:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.554 14:18:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.554 14:18:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.554 14:18:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.554 14:18:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.554 14:18:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:18.554 { 00:20:18.554 "cntlid": 97, 00:20:18.554 "qid": 0, 00:20:18.554 "state": "enabled", 00:20:18.554 "listen_address": { 00:20:18.554 "trtype": "RDMA", 00:20:18.554 "adrfam": "IPv4", 00:20:18.554 "traddr": "192.168.100.8", 00:20:18.554 "trsvcid": "4420" 00:20:18.554 }, 00:20:18.554 "peer_address": { 00:20:18.554 "trtype": "RDMA", 00:20:18.554 "adrfam": "IPv4", 00:20:18.554 "traddr": "192.168.100.8", 00:20:18.554 "trsvcid": "36999" 00:20:18.554 }, 00:20:18.554 "auth": { 00:20:18.554 "state": "completed", 00:20:18.554 "digest": "sha512", 00:20:18.554 "dhgroup": "null" 00:20:18.554 } 00:20:18.554 } 00:20:18.554 ]' 00:20:18.554 14:18:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:18.811 14:18:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:18.811 14:18:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:18.811 14:18:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:18.811 14:18:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:18.811 14:18:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.811 14:18:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.811 14:18:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.070 14:18:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret 
DHHC-1:00:MTJkNWI3YWM1ZWVmZDljNTg2NTY5OTUwMmVmYjU4ODNjYjY0N2UxYjNjZDM0M2Jj57D/bQ==: --dhchap-ctrl-secret DHHC-1:03:MzE3MjY2ZDlkM2I2YTM2YjMyN2VhNmE1OGYyMTFmNWZhM2I5MTUxN2IyMmM5ZWExNzcwNWJhNWI2MjVhMDcxNP858Ys=: 00:20:20.045 14:18:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.303 14:18:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:20:20.303 14:18:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.303 14:18:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.303 14:18:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.303 14:18:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:20.303 14:18:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:20.303 14:18:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:20.561 14:18:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:20:20.561 14:18:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:20.561 14:18:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:20.561 14:18:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:20.561 14:18:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:20.561 14:18:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.561 14:18:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.561 14:18:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.561 14:18:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.561 14:18:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.561 14:18:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.561 14:18:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.819 00:20:20.819 14:18:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:20.819 14:18:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:20.819 14:18:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.076 14:18:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.076 14:18:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.076 14:18:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.076 14:18:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.076 14:18:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.076 14:18:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:21.076 { 00:20:21.076 "cntlid": 99, 00:20:21.076 "qid": 0, 00:20:21.076 "state": "enabled", 00:20:21.076 "listen_address": { 00:20:21.076 "trtype": "RDMA", 00:20:21.076 "adrfam": "IPv4", 00:20:21.076 "traddr": "192.168.100.8", 00:20:21.076 "trsvcid": "4420" 00:20:21.076 }, 00:20:21.076 "peer_address": { 00:20:21.076 "trtype": "RDMA", 00:20:21.076 "adrfam": "IPv4", 00:20:21.076 "traddr": "192.168.100.8", 00:20:21.076 "trsvcid": "55731" 00:20:21.076 }, 00:20:21.076 "auth": { 00:20:21.077 "state": "completed", 00:20:21.077 "digest": "sha512", 00:20:21.077 "dhgroup": "null" 00:20:21.077 } 00:20:21.077 } 00:20:21.077 ]' 00:20:21.077 14:18:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:21.077 14:18:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:21.077 14:18:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:21.077 14:18:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:21.077 14:18:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:21.077 14:18:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.077 14:18:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.077 14:18:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.334 14:18:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:01:NDUxZWQ2M2NmNzUyYzc5ZGFmODY3MWVjYjY2MzlmNWVZ/epb: --dhchap-ctrl-secret DHHC-1:02:MmE0M2Y3ZTI4OTQ0YzU3MzE2YmUwMTlhNTgxN2Q2ZDU2MDRlMTNlOWEzMTVlNTg06IkDFg==: 00:20:22.706 14:18:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.706 14:18:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:20:22.706 14:18:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.706 14:18:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.706 14:18:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.706 14:18:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:22.706 14:18:49 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:22.707 14:18:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:22.707 14:18:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:20:22.707 14:18:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:22.707 14:18:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:22.707 14:18:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:22.707 14:18:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:22.707 14:18:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.707 14:18:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.707 14:18:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.707 14:18:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.707 14:18:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.707 14:18:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.707 14:18:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.272 00:20:23.272 14:18:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.272 14:18:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.272 14:18:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.272 14:18:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.272 14:18:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.272 14:18:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.272 14:18:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.530 14:18:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.530 14:18:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.530 { 00:20:23.530 "cntlid": 101, 00:20:23.530 "qid": 0, 00:20:23.530 "state": "enabled", 00:20:23.530 "listen_address": { 00:20:23.530 "trtype": "RDMA", 00:20:23.530 "adrfam": "IPv4", 00:20:23.530 "traddr": "192.168.100.8", 00:20:23.530 "trsvcid": "4420" 00:20:23.530 }, 00:20:23.530 "peer_address": { 00:20:23.530 "trtype": "RDMA", 
00:20:23.530 "adrfam": "IPv4", 00:20:23.530 "traddr": "192.168.100.8", 00:20:23.530 "trsvcid": "50552" 00:20:23.530 }, 00:20:23.530 "auth": { 00:20:23.530 "state": "completed", 00:20:23.530 "digest": "sha512", 00:20:23.530 "dhgroup": "null" 00:20:23.530 } 00:20:23.530 } 00:20:23.530 ]' 00:20:23.530 14:18:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.530 14:18:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:23.530 14:18:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.530 14:18:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:23.530 14:18:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.530 14:18:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.530 14:18:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.530 14:18:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.788 14:18:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:02:MTdlMDdmOTVhMWI1NjdmNjA5NzdiMzU4M2FiMDIwY2M0ZWM3ZmNjMDMyZWZkOTgzuoDADg==: --dhchap-ctrl-secret DHHC-1:01:NGVkNzMzN2FmMTc5YzhjNTc0MjEzOGU4MWY3MzQ4YzlGMVrI: 00:20:24.720 14:18:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.978 14:18:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:20:24.978 14:18:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.978 14:18:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.978 14:18:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.978 14:18:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:24.978 14:18:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:24.978 14:18:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:25.236 14:18:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:20:25.236 14:18:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:25.236 14:18:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:25.236 14:18:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:25.236 14:18:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:25.236 14:18:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.236 14:18:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key3 00:20:25.236 14:18:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.236 14:18:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.236 14:18:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.236 14:18:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:25.236 14:18:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:25.493 00:20:25.493 14:18:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.493 14:18:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.493 14:18:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.751 14:18:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.751 14:18:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.751 14:18:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.751 14:18:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.751 14:18:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.751 14:18:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:25.751 { 00:20:25.751 "cntlid": 103, 00:20:25.751 "qid": 0, 00:20:25.751 "state": "enabled", 00:20:25.751 "listen_address": { 00:20:25.751 "trtype": "RDMA", 00:20:25.751 "adrfam": "IPv4", 00:20:25.751 "traddr": "192.168.100.8", 00:20:25.751 "trsvcid": "4420" 00:20:25.751 }, 00:20:25.751 "peer_address": { 00:20:25.751 "trtype": "RDMA", 00:20:25.751 "adrfam": "IPv4", 00:20:25.751 "traddr": "192.168.100.8", 00:20:25.751 "trsvcid": "40065" 00:20:25.751 }, 00:20:25.751 "auth": { 00:20:25.751 "state": "completed", 00:20:25.751 "digest": "sha512", 00:20:25.751 "dhgroup": "null" 00:20:25.751 } 00:20:25.751 } 00:20:25.751 ]' 00:20:25.751 14:18:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:25.751 14:18:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:25.751 14:18:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:25.751 14:18:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:26.008 14:18:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:26.008 14:18:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.008 14:18:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.009 14:18:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.265 14:18:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:03:YjJkYTdlMzNiOWZkZjVkYTM3MjlmYmRkZDE1ZDhiMTQ2M2VhYTg3NTlkNjQ0M2FiZmNmZTJiOTVhZWZiYzg4M2IKApo=: 00:20:27.197 14:18:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.197 14:18:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:20:27.197 14:18:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.197 14:18:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.197 14:18:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.197 14:18:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:27.197 14:18:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:27.197 14:18:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:27.197 14:18:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:27.455 14:18:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:20:27.455 14:18:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:27.455 14:18:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:27.455 14:18:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:27.455 14:18:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:27.455 14:18:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.455 14:18:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.455 14:18:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.455 14:18:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.455 14:18:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.455 14:18:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.455 14:18:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.020 00:20:28.020 14:18:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:28.020 14:18:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:28.020 14:18:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.020 14:18:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.020 14:18:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.020 14:18:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.020 14:18:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.277 14:18:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.277 14:18:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:28.278 { 00:20:28.278 "cntlid": 105, 00:20:28.278 "qid": 0, 00:20:28.278 "state": "enabled", 00:20:28.278 "listen_address": { 00:20:28.278 "trtype": "RDMA", 00:20:28.278 "adrfam": "IPv4", 00:20:28.278 "traddr": "192.168.100.8", 00:20:28.278 "trsvcid": "4420" 00:20:28.278 }, 00:20:28.278 "peer_address": { 00:20:28.278 "trtype": "RDMA", 00:20:28.278 "adrfam": "IPv4", 00:20:28.278 "traddr": "192.168.100.8", 00:20:28.278 "trsvcid": "45691" 00:20:28.278 }, 00:20:28.278 "auth": { 00:20:28.278 "state": "completed", 00:20:28.278 "digest": "sha512", 00:20:28.278 "dhgroup": "ffdhe2048" 00:20:28.278 } 00:20:28.278 } 00:20:28.278 ]' 00:20:28.278 14:18:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:28.278 14:18:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:28.278 14:18:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:28.278 14:18:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:28.278 14:18:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:28.278 14:18:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.278 14:18:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.278 14:18:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.536 14:18:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:00:MTJkNWI3YWM1ZWVmZDljNTg2NTY5OTUwMmVmYjU4ODNjYjY0N2UxYjNjZDM0M2Jj57D/bQ==: --dhchap-ctrl-secret DHHC-1:03:MzE3MjY2ZDlkM2I2YTM2YjMyN2VhNmE1OGYyMTFmNWZhM2I5MTUxN2IyMmM5ZWExNzcwNWJhNWI2MjVhMDcxNP858Ys=: 00:20:29.910 14:18:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.910 14:18:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:20:29.910 14:18:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.910 14:18:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.910 14:18:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.910 14:18:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:29.910 14:18:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:29.910 14:18:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:29.910 14:18:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:20:29.910 14:18:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:29.910 14:18:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:29.910 14:18:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:29.910 14:18:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:29.910 14:18:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.910 14:18:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.910 14:18:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.910 14:18:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.910 14:18:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.910 14:18:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.910 14:18:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.475 00:20:30.475 14:18:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:30.475 14:18:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:30.475 14:18:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.475 14:18:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.475 14:18:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.475 14:18:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.475 14:18:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.475 14:18:57 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.475 14:18:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:30.475 { 00:20:30.475 "cntlid": 107, 00:20:30.475 "qid": 0, 00:20:30.475 "state": "enabled", 00:20:30.475 "listen_address": { 00:20:30.475 "trtype": "RDMA", 00:20:30.475 "adrfam": "IPv4", 00:20:30.475 "traddr": "192.168.100.8", 00:20:30.475 "trsvcid": "4420" 00:20:30.475 }, 00:20:30.475 "peer_address": { 00:20:30.475 "trtype": "RDMA", 00:20:30.475 "adrfam": "IPv4", 00:20:30.475 "traddr": "192.168.100.8", 00:20:30.475 "trsvcid": "32914" 00:20:30.475 }, 00:20:30.475 "auth": { 00:20:30.475 "state": "completed", 00:20:30.475 "digest": "sha512", 00:20:30.475 "dhgroup": "ffdhe2048" 00:20:30.475 } 00:20:30.475 } 00:20:30.475 ]' 00:20:30.733 14:18:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:30.733 14:18:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:30.733 14:18:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:30.733 14:18:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:30.733 14:18:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:30.733 14:18:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.733 14:18:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.733 14:18:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.991 14:18:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:01:NDUxZWQ2M2NmNzUyYzc5ZGFmODY3MWVjYjY2MzlmNWVZ/epb: --dhchap-ctrl-secret DHHC-1:02:MmE0M2Y3ZTI4OTQ0YzU3MzE2YmUwMTlhNTgxN2Q2ZDU2MDRlMTNlOWEzMTVlNTg06IkDFg==: 00:20:31.925 14:18:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.183 14:18:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:20:32.183 14:18:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.183 14:18:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.183 14:18:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.183 14:18:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:32.183 14:18:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:32.183 14:18:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:32.441 14:18:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:20:32.441 14:18:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 
-- # local digest dhgroup key ckey qpairs 00:20:32.441 14:18:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:32.441 14:18:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:32.441 14:18:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:32.441 14:18:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.441 14:18:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.441 14:18:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.441 14:18:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.441 14:18:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.441 14:18:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.441 14:18:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.700 00:20:32.700 14:19:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:32.700 14:19:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.700 14:19:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.957 14:19:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.957 14:19:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.957 14:19:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.957 14:19:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.957 14:19:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.957 14:19:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:32.957 { 00:20:32.957 "cntlid": 109, 00:20:32.957 "qid": 0, 00:20:32.957 "state": "enabled", 00:20:32.957 "listen_address": { 00:20:32.958 "trtype": "RDMA", 00:20:32.958 "adrfam": "IPv4", 00:20:32.958 "traddr": "192.168.100.8", 00:20:32.958 "trsvcid": "4420" 00:20:32.958 }, 00:20:32.958 "peer_address": { 00:20:32.958 "trtype": "RDMA", 00:20:32.958 "adrfam": "IPv4", 00:20:32.958 "traddr": "192.168.100.8", 00:20:32.958 "trsvcid": "45149" 00:20:32.958 }, 00:20:32.958 "auth": { 00:20:32.958 "state": "completed", 00:20:32.958 "digest": "sha512", 00:20:32.958 "dhgroup": "ffdhe2048" 00:20:32.958 } 00:20:32.958 } 00:20:32.958 ]' 00:20:32.958 14:19:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:33.215 14:19:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:33.215 14:19:00 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:33.215 14:19:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:33.215 14:19:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:33.215 14:19:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.215 14:19:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.215 14:19:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.473 14:19:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:02:MTdlMDdmOTVhMWI1NjdmNjA5NzdiMzU4M2FiMDIwY2M0ZWM3ZmNjMDMyZWZkOTgzuoDADg==: --dhchap-ctrl-secret DHHC-1:01:NGVkNzMzN2FmMTc5YzhjNTc0MjEzOGU4MWY3MzQ4YzlGMVrI: 00:20:34.459 14:19:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.717 14:19:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:20:34.717 14:19:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.717 14:19:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.717 14:19:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.717 14:19:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:34.717 14:19:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:34.717 14:19:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:34.975 14:19:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:20:34.975 14:19:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:34.975 14:19:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:34.975 14:19:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:34.975 14:19:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:34.975 14:19:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.975 14:19:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key3 00:20:34.975 14:19:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.975 14:19:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.975 14:19:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.975 14:19:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # 
hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:34.975 14:19:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:35.233 00:20:35.233 14:19:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:35.233 14:19:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.233 14:19:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:35.491 14:19:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.491 14:19:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.491 14:19:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.491 14:19:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.492 14:19:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.492 14:19:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:35.492 { 00:20:35.492 "cntlid": 111, 00:20:35.492 "qid": 0, 00:20:35.492 "state": "enabled", 00:20:35.492 "listen_address": { 00:20:35.492 "trtype": "RDMA", 00:20:35.492 "adrfam": "IPv4", 00:20:35.492 "traddr": "192.168.100.8", 00:20:35.492 "trsvcid": "4420" 00:20:35.492 }, 00:20:35.492 "peer_address": { 00:20:35.492 "trtype": "RDMA", 00:20:35.492 "adrfam": "IPv4", 00:20:35.492 "traddr": "192.168.100.8", 00:20:35.492 "trsvcid": "39308" 00:20:35.492 }, 00:20:35.492 "auth": { 00:20:35.492 "state": "completed", 00:20:35.492 "digest": "sha512", 00:20:35.492 "dhgroup": "ffdhe2048" 00:20:35.492 } 00:20:35.492 } 00:20:35.492 ]' 00:20:35.492 14:19:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.492 14:19:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:35.492 14:19:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.492 14:19:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:35.492 14:19:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:35.749 14:19:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.749 14:19:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.749 14:19:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.007 14:19:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:03:YjJkYTdlMzNiOWZkZjVkYTM3MjlmYmRkZDE1ZDhiMTQ2M2VhYTg3NTlkNjQ0M2FiZmNmZTJiOTVhZWZiYzg4M2IKApo=: 00:20:36.938 
14:19:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.197 14:19:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:20:37.197 14:19:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.197 14:19:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.197 14:19:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.197 14:19:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:37.197 14:19:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:37.197 14:19:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:37.197 14:19:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:37.455 14:19:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:20:37.455 14:19:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:37.456 14:19:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:37.456 14:19:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:37.456 14:19:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:37.456 14:19:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.456 14:19:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.456 14:19:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.456 14:19:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.456 14:19:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.456 14:19:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.456 14:19:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.714 00:20:37.714 14:19:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:37.714 14:19:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:37.714 14:19:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.973 14:19:05 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.973 14:19:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.973 14:19:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.973 14:19:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.973 14:19:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.973 14:19:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:37.973 { 00:20:37.973 "cntlid": 113, 00:20:37.973 "qid": 0, 00:20:37.973 "state": "enabled", 00:20:37.973 "listen_address": { 00:20:37.973 "trtype": "RDMA", 00:20:37.973 "adrfam": "IPv4", 00:20:37.973 "traddr": "192.168.100.8", 00:20:37.973 "trsvcid": "4420" 00:20:37.973 }, 00:20:37.973 "peer_address": { 00:20:37.973 "trtype": "RDMA", 00:20:37.973 "adrfam": "IPv4", 00:20:37.973 "traddr": "192.168.100.8", 00:20:37.973 "trsvcid": "60810" 00:20:37.973 }, 00:20:37.973 "auth": { 00:20:37.973 "state": "completed", 00:20:37.973 "digest": "sha512", 00:20:37.973 "dhgroup": "ffdhe3072" 00:20:37.973 } 00:20:37.973 } 00:20:37.973 ]' 00:20:37.973 14:19:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:37.973 14:19:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:37.973 14:19:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:37.973 14:19:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:37.973 14:19:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:37.973 14:19:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.973 14:19:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.973 14:19:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.231 14:19:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:00:MTJkNWI3YWM1ZWVmZDljNTg2NTY5OTUwMmVmYjU4ODNjYjY0N2UxYjNjZDM0M2Jj57D/bQ==: --dhchap-ctrl-secret DHHC-1:03:MzE3MjY2ZDlkM2I2YTM2YjMyN2VhNmE1OGYyMTFmNWZhM2I5MTUxN2IyMmM5ZWExNzcwNWJhNWI2MjVhMDcxNP858Ys=: 00:20:39.608 14:19:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.608 14:19:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:20:39.608 14:19:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.608 14:19:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.608 14:19:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.608 14:19:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:39.608 14:19:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:39.608 14:19:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:39.608 14:19:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:20:39.608 14:19:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:39.608 14:19:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:39.608 14:19:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:39.608 14:19:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:39.608 14:19:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.608 14:19:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.608 14:19:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.608 14:19:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.867 14:19:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.867 14:19:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.867 14:19:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.125 00:20:40.125 14:19:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:40.125 14:19:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:40.125 14:19:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.385 14:19:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.385 14:19:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.385 14:19:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.385 14:19:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.385 14:19:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.385 14:19:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:40.385 { 00:20:40.385 "cntlid": 115, 00:20:40.385 "qid": 0, 00:20:40.385 "state": "enabled", 00:20:40.385 "listen_address": { 00:20:40.385 "trtype": "RDMA", 00:20:40.385 "adrfam": "IPv4", 00:20:40.385 "traddr": "192.168.100.8", 00:20:40.385 "trsvcid": "4420" 00:20:40.385 }, 00:20:40.385 "peer_address": { 00:20:40.385 "trtype": "RDMA", 00:20:40.385 "adrfam": "IPv4", 00:20:40.385 
"traddr": "192.168.100.8", 00:20:40.385 "trsvcid": "45453" 00:20:40.385 }, 00:20:40.385 "auth": { 00:20:40.385 "state": "completed", 00:20:40.385 "digest": "sha512", 00:20:40.385 "dhgroup": "ffdhe3072" 00:20:40.385 } 00:20:40.385 } 00:20:40.385 ]' 00:20:40.385 14:19:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:40.385 14:19:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:40.385 14:19:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:40.385 14:19:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:40.385 14:19:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:40.385 14:19:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.385 14:19:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.385 14:19:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.643 14:19:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:01:NDUxZWQ2M2NmNzUyYzc5ZGFmODY3MWVjYjY2MzlmNWVZ/epb: --dhchap-ctrl-secret DHHC-1:02:MmE0M2Y3ZTI4OTQ0YzU3MzE2YmUwMTlhNTgxN2Q2ZDU2MDRlMTNlOWEzMTVlNTg06IkDFg==: 00:20:42.019 14:19:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.019 14:19:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:20:42.019 14:19:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.019 14:19:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.019 14:19:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.019 14:19:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:42.019 14:19:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:42.019 14:19:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:42.277 14:19:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:20:42.277 14:19:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:42.277 14:19:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:42.277 14:19:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:42.277 14:19:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:42.277 14:19:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.278 14:19:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.278 14:19:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.278 14:19:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.278 14:19:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.278 14:19:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.278 14:19:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.536 00:20:42.536 14:19:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:42.536 14:19:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.536 14:19:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:42.794 14:19:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.794 14:19:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.794 14:19:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.794 14:19:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.794 14:19:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.794 14:19:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:42.794 { 00:20:42.794 "cntlid": 117, 00:20:42.794 "qid": 0, 00:20:42.794 "state": "enabled", 00:20:42.794 "listen_address": { 00:20:42.794 "trtype": "RDMA", 00:20:42.794 "adrfam": "IPv4", 00:20:42.794 "traddr": "192.168.100.8", 00:20:42.794 "trsvcid": "4420" 00:20:42.794 }, 00:20:42.794 "peer_address": { 00:20:42.794 "trtype": "RDMA", 00:20:42.794 "adrfam": "IPv4", 00:20:42.794 "traddr": "192.168.100.8", 00:20:42.794 "trsvcid": "36142" 00:20:42.794 }, 00:20:42.794 "auth": { 00:20:42.794 "state": "completed", 00:20:42.795 "digest": "sha512", 00:20:42.795 "dhgroup": "ffdhe3072" 00:20:42.795 } 00:20:42.795 } 00:20:42.795 ]' 00:20:42.795 14:19:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:42.795 14:19:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:42.795 14:19:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:42.795 14:19:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:42.795 14:19:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:42.795 14:19:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.795 14:19:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.795 14:19:10 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.052 14:19:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:02:MTdlMDdmOTVhMWI1NjdmNjA5NzdiMzU4M2FiMDIwY2M0ZWM3ZmNjMDMyZWZkOTgzuoDADg==: --dhchap-ctrl-secret DHHC-1:01:NGVkNzMzN2FmMTc5YzhjNTc0MjEzOGU4MWY3MzQ4YzlGMVrI: 00:20:44.428 14:19:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.428 14:19:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:20:44.428 14:19:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.428 14:19:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.428 14:19:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.428 14:19:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:44.428 14:19:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:44.428 14:19:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:44.686 14:19:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:20:44.686 14:19:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:44.686 14:19:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:44.686 14:19:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:44.686 14:19:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:44.686 14:19:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.686 14:19:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key3 00:20:44.686 14:19:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.686 14:19:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.686 14:19:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.686 14:19:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:44.686 14:19:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key3 00:20:44.945 00:20:44.945 14:19:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:44.945 14:19:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.945 14:19:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:45.203 14:19:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.203 14:19:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.203 14:19:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.203 14:19:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.203 14:19:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.203 14:19:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:45.203 { 00:20:45.203 "cntlid": 119, 00:20:45.203 "qid": 0, 00:20:45.203 "state": "enabled", 00:20:45.203 "listen_address": { 00:20:45.203 "trtype": "RDMA", 00:20:45.203 "adrfam": "IPv4", 00:20:45.203 "traddr": "192.168.100.8", 00:20:45.203 "trsvcid": "4420" 00:20:45.203 }, 00:20:45.203 "peer_address": { 00:20:45.203 "trtype": "RDMA", 00:20:45.203 "adrfam": "IPv4", 00:20:45.203 "traddr": "192.168.100.8", 00:20:45.203 "trsvcid": "44089" 00:20:45.203 }, 00:20:45.203 "auth": { 00:20:45.203 "state": "completed", 00:20:45.203 "digest": "sha512", 00:20:45.203 "dhgroup": "ffdhe3072" 00:20:45.203 } 00:20:45.203 } 00:20:45.203 ]' 00:20:45.203 14:19:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:45.461 14:19:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:45.461 14:19:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:45.461 14:19:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:45.461 14:19:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:45.461 14:19:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.462 14:19:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.462 14:19:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.720 14:19:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:03:YjJkYTdlMzNiOWZkZjVkYTM3MjlmYmRkZDE1ZDhiMTQ2M2VhYTg3NTlkNjQ0M2FiZmNmZTJiOTVhZWZiYzg4M2IKApo=: 00:20:46.654 14:19:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.912 14:19:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:20:46.912 14:19:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.912 14:19:14 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:46.912 14:19:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.912 14:19:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:46.912 14:19:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:46.912 14:19:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:46.912 14:19:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:47.170 14:19:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:20:47.170 14:19:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:47.170 14:19:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:47.170 14:19:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:47.170 14:19:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:47.170 14:19:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.170 14:19:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.170 14:19:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.170 14:19:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.170 14:19:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.170 14:19:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.170 14:19:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.426 00:20:47.427 14:19:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:47.427 14:19:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:47.427 14:19:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.686 14:19:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.686 14:19:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.686 14:19:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.686 14:19:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.686 14:19:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.686 14:19:15 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:47.686 { 00:20:47.686 "cntlid": 121, 00:20:47.686 "qid": 0, 00:20:47.686 "state": "enabled", 00:20:47.686 "listen_address": { 00:20:47.686 "trtype": "RDMA", 00:20:47.686 "adrfam": "IPv4", 00:20:47.686 "traddr": "192.168.100.8", 00:20:47.686 "trsvcid": "4420" 00:20:47.686 }, 00:20:47.686 "peer_address": { 00:20:47.686 "trtype": "RDMA", 00:20:47.686 "adrfam": "IPv4", 00:20:47.686 "traddr": "192.168.100.8", 00:20:47.686 "trsvcid": "47043" 00:20:47.686 }, 00:20:47.686 "auth": { 00:20:47.686 "state": "completed", 00:20:47.686 "digest": "sha512", 00:20:47.686 "dhgroup": "ffdhe4096" 00:20:47.686 } 00:20:47.686 } 00:20:47.686 ]' 00:20:47.686 14:19:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:47.974 14:19:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:47.974 14:19:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:47.974 14:19:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:47.974 14:19:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:47.974 14:19:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.974 14:19:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.974 14:19:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.231 14:19:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:00:MTJkNWI3YWM1ZWVmZDljNTg2NTY5OTUwMmVmYjU4ODNjYjY0N2UxYjNjZDM0M2Jj57D/bQ==: --dhchap-ctrl-secret DHHC-1:03:MzE3MjY2ZDlkM2I2YTM2YjMyN2VhNmE1OGYyMTFmNWZhM2I5MTUxN2IyMmM5ZWExNzcwNWJhNWI2MjVhMDcxNP858Ys=: 00:20:49.165 14:19:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.423 14:19:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:20:49.423 14:19:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.423 14:19:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.423 14:19:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.423 14:19:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:49.423 14:19:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:49.423 14:19:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:49.680 14:19:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:20:49.680 14:19:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:49.680 
14:19:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:49.681 14:19:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:49.681 14:19:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:49.681 14:19:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.681 14:19:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.681 14:19:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.681 14:19:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.681 14:19:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.681 14:19:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.681 14:19:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.939 00:20:49.939 14:19:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:49.939 14:19:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.939 14:19:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:50.197 14:19:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.197 14:19:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.197 14:19:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.197 14:19:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.197 14:19:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.197 14:19:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:50.197 { 00:20:50.197 "cntlid": 123, 00:20:50.197 "qid": 0, 00:20:50.197 "state": "enabled", 00:20:50.197 "listen_address": { 00:20:50.197 "trtype": "RDMA", 00:20:50.197 "adrfam": "IPv4", 00:20:50.197 "traddr": "192.168.100.8", 00:20:50.197 "trsvcid": "4420" 00:20:50.197 }, 00:20:50.197 "peer_address": { 00:20:50.197 "trtype": "RDMA", 00:20:50.197 "adrfam": "IPv4", 00:20:50.197 "traddr": "192.168.100.8", 00:20:50.197 "trsvcid": "34433" 00:20:50.197 }, 00:20:50.197 "auth": { 00:20:50.197 "state": "completed", 00:20:50.197 "digest": "sha512", 00:20:50.197 "dhgroup": "ffdhe4096" 00:20:50.197 } 00:20:50.197 } 00:20:50.197 ]' 00:20:50.197 14:19:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:50.197 14:19:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:50.197 14:19:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:20:50.197 14:19:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:50.197 14:19:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:50.454 14:19:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.454 14:19:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.454 14:19:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.711 14:19:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:01:NDUxZWQ2M2NmNzUyYzc5ZGFmODY3MWVjYjY2MzlmNWVZ/epb: --dhchap-ctrl-secret DHHC-1:02:MmE0M2Y3ZTI4OTQ0YzU3MzE2YmUwMTlhNTgxN2Q2ZDU2MDRlMTNlOWEzMTVlNTg06IkDFg==: 00:20:51.643 14:19:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.929 14:19:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:20:51.929 14:19:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.929 14:19:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.929 14:19:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.929 14:19:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:51.929 14:19:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:51.929 14:19:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:52.186 14:19:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:20:52.186 14:19:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:52.186 14:19:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:52.186 14:19:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:52.186 14:19:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:52.186 14:19:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.186 14:19:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.186 14:19:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.186 14:19:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.186 14:19:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.186 14:19:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.186 14:19:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.443 00:20:52.443 14:19:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:52.443 14:19:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:52.443 14:19:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.700 14:19:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.700 14:19:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.700 14:19:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.700 14:19:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.700 14:19:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.700 14:19:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:52.700 { 00:20:52.700 "cntlid": 125, 00:20:52.700 "qid": 0, 00:20:52.700 "state": "enabled", 00:20:52.700 "listen_address": { 00:20:52.700 "trtype": "RDMA", 00:20:52.700 "adrfam": "IPv4", 00:20:52.700 "traddr": "192.168.100.8", 00:20:52.700 "trsvcid": "4420" 00:20:52.700 }, 00:20:52.700 "peer_address": { 00:20:52.700 "trtype": "RDMA", 00:20:52.700 "adrfam": "IPv4", 00:20:52.700 "traddr": "192.168.100.8", 00:20:52.700 "trsvcid": "44458" 00:20:52.700 }, 00:20:52.700 "auth": { 00:20:52.700 "state": "completed", 00:20:52.700 "digest": "sha512", 00:20:52.700 "dhgroup": "ffdhe4096" 00:20:52.700 } 00:20:52.700 } 00:20:52.700 ]' 00:20:52.700 14:19:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:52.700 14:19:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:52.700 14:19:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:52.700 14:19:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:52.700 14:19:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:52.957 14:19:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.957 14:19:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.957 14:19:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.214 14:19:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret 
DHHC-1:02:MTdlMDdmOTVhMWI1NjdmNjA5NzdiMzU4M2FiMDIwY2M0ZWM3ZmNjMDMyZWZkOTgzuoDADg==: --dhchap-ctrl-secret DHHC-1:01:NGVkNzMzN2FmMTc5YzhjNTc0MjEzOGU4MWY3MzQ4YzlGMVrI: 00:20:54.146 14:19:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.146 14:19:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:20:54.146 14:19:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.146 14:19:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.406 14:19:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.406 14:19:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:54.406 14:19:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:54.406 14:19:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:54.406 14:19:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:20:54.406 14:19:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:54.406 14:19:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:54.406 14:19:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:54.406 14:19:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:54.406 14:19:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.406 14:19:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key3 00:20:54.406 14:19:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.406 14:19:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.406 14:19:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.406 14:19:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:54.406 14:19:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:54.971 00:20:54.971 14:19:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:54.971 14:19:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.971 14:19:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:55.229 14:19:22 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.229 14:19:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.229 14:19:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.229 14:19:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.229 14:19:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.229 14:19:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:55.229 { 00:20:55.229 "cntlid": 127, 00:20:55.229 "qid": 0, 00:20:55.229 "state": "enabled", 00:20:55.229 "listen_address": { 00:20:55.229 "trtype": "RDMA", 00:20:55.229 "adrfam": "IPv4", 00:20:55.229 "traddr": "192.168.100.8", 00:20:55.229 "trsvcid": "4420" 00:20:55.229 }, 00:20:55.229 "peer_address": { 00:20:55.229 "trtype": "RDMA", 00:20:55.229 "adrfam": "IPv4", 00:20:55.229 "traddr": "192.168.100.8", 00:20:55.229 "trsvcid": "42106" 00:20:55.229 }, 00:20:55.229 "auth": { 00:20:55.229 "state": "completed", 00:20:55.229 "digest": "sha512", 00:20:55.229 "dhgroup": "ffdhe4096" 00:20:55.229 } 00:20:55.229 } 00:20:55.229 ]' 00:20:55.229 14:19:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:55.229 14:19:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:55.229 14:19:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:55.229 14:19:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:55.229 14:19:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:55.229 14:19:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.229 14:19:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.229 14:19:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.486 14:19:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:03:YjJkYTdlMzNiOWZkZjVkYTM3MjlmYmRkZDE1ZDhiMTQ2M2VhYTg3NTlkNjQ0M2FiZmNmZTJiOTVhZWZiYzg4M2IKApo=: 00:20:56.858 14:19:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.858 14:19:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:20:56.858 14:19:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.858 14:19:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.858 14:19:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.858 14:19:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:56.858 14:19:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:56.858 14:19:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:56.858 14:19:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:56.858 14:19:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:20:56.858 14:19:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:56.858 14:19:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:56.858 14:19:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:56.858 14:19:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:56.858 14:19:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.858 14:19:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.858 14:19:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.858 14:19:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.117 14:19:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.117 14:19:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.117 14:19:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.682 00:20:57.682 14:19:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:57.682 14:19:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.682 14:19:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:57.682 14:19:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.682 14:19:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.682 14:19:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.682 14:19:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.939 14:19:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.939 14:19:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:57.939 { 00:20:57.939 "cntlid": 129, 00:20:57.939 "qid": 0, 00:20:57.939 "state": "enabled", 00:20:57.939 "listen_address": { 00:20:57.939 "trtype": "RDMA", 00:20:57.939 "adrfam": "IPv4", 00:20:57.939 "traddr": "192.168.100.8", 00:20:57.939 "trsvcid": "4420" 00:20:57.939 }, 00:20:57.939 "peer_address": { 00:20:57.939 "trtype": "RDMA", 00:20:57.939 "adrfam": "IPv4", 00:20:57.939 
"traddr": "192.168.100.8", 00:20:57.939 "trsvcid": "54415" 00:20:57.939 }, 00:20:57.939 "auth": { 00:20:57.939 "state": "completed", 00:20:57.939 "digest": "sha512", 00:20:57.939 "dhgroup": "ffdhe6144" 00:20:57.939 } 00:20:57.939 } 00:20:57.939 ]' 00:20:57.939 14:19:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:57.939 14:19:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:57.939 14:19:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:57.939 14:19:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:57.939 14:19:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:57.939 14:19:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.939 14:19:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.939 14:19:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.196 14:19:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:00:MTJkNWI3YWM1ZWVmZDljNTg2NTY5OTUwMmVmYjU4ODNjYjY0N2UxYjNjZDM0M2Jj57D/bQ==: --dhchap-ctrl-secret DHHC-1:03:MzE3MjY2ZDlkM2I2YTM2YjMyN2VhNmE1OGYyMTFmNWZhM2I5MTUxN2IyMmM5ZWExNzcwNWJhNWI2MjVhMDcxNP858Ys=: 00:20:59.129 14:19:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.387 14:19:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:20:59.387 14:19:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.387 14:19:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.387 14:19:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.387 14:19:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:59.387 14:19:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:59.387 14:19:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:59.645 14:19:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:20:59.645 14:19:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:59.645 14:19:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:59.645 14:19:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:59.645 14:19:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:59.645 14:19:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.645 14:19:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.645 14:19:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.645 14:19:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.645 14:19:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.645 14:19:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.645 14:19:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.210 00:21:00.210 14:19:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:00.210 14:19:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:00.211 14:19:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.468 14:19:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.468 14:19:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.468 14:19:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.468 14:19:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.468 14:19:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.468 14:19:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.468 { 00:21:00.468 "cntlid": 131, 00:21:00.468 "qid": 0, 00:21:00.468 "state": "enabled", 00:21:00.468 "listen_address": { 00:21:00.468 "trtype": "RDMA", 00:21:00.468 "adrfam": "IPv4", 00:21:00.468 "traddr": "192.168.100.8", 00:21:00.468 "trsvcid": "4420" 00:21:00.468 }, 00:21:00.468 "peer_address": { 00:21:00.468 "trtype": "RDMA", 00:21:00.468 "adrfam": "IPv4", 00:21:00.468 "traddr": "192.168.100.8", 00:21:00.468 "trsvcid": "34234" 00:21:00.468 }, 00:21:00.468 "auth": { 00:21:00.468 "state": "completed", 00:21:00.468 "digest": "sha512", 00:21:00.468 "dhgroup": "ffdhe6144" 00:21:00.468 } 00:21:00.468 } 00:21:00.468 ]' 00:21:00.468 14:19:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:00.468 14:19:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.468 14:19:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:00.468 14:19:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:00.468 14:19:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:00.468 14:19:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.468 14:19:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:21:00.468 14:19:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.726 14:19:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:01:NDUxZWQ2M2NmNzUyYzc5ZGFmODY3MWVjYjY2MzlmNWVZ/epb: --dhchap-ctrl-secret DHHC-1:02:MmE0M2Y3ZTI4OTQ0YzU3MzE2YmUwMTlhNTgxN2Q2ZDU2MDRlMTNlOWEzMTVlNTg06IkDFg==: 00:21:02.159 14:19:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.159 14:19:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:21:02.159 14:19:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.159 14:19:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.159 14:19:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.159 14:19:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:02.159 14:19:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:02.159 14:19:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:02.415 14:19:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:21:02.415 14:19:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:02.415 14:19:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:02.415 14:19:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:02.415 14:19:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:02.415 14:19:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.415 14:19:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.415 14:19:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.415 14:19:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.415 14:19:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.415 14:19:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.415 14:19:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.981 00:21:02.981 14:19:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:02.981 14:19:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:02.981 14:19:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.981 14:19:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.981 14:19:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.981 14:19:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.981 14:19:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.240 14:19:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.240 14:19:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:03.240 { 00:21:03.240 "cntlid": 133, 00:21:03.240 "qid": 0, 00:21:03.240 "state": "enabled", 00:21:03.240 "listen_address": { 00:21:03.240 "trtype": "RDMA", 00:21:03.240 "adrfam": "IPv4", 00:21:03.240 "traddr": "192.168.100.8", 00:21:03.240 "trsvcid": "4420" 00:21:03.240 }, 00:21:03.240 "peer_address": { 00:21:03.240 "trtype": "RDMA", 00:21:03.240 "adrfam": "IPv4", 00:21:03.240 "traddr": "192.168.100.8", 00:21:03.240 "trsvcid": "36699" 00:21:03.240 }, 00:21:03.240 "auth": { 00:21:03.240 "state": "completed", 00:21:03.240 "digest": "sha512", 00:21:03.240 "dhgroup": "ffdhe6144" 00:21:03.240 } 00:21:03.240 } 00:21:03.240 ]' 00:21:03.240 14:19:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:03.240 14:19:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:03.240 14:19:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:03.240 14:19:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:03.240 14:19:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:03.240 14:19:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.240 14:19:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.240 14:19:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.498 14:19:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:02:MTdlMDdmOTVhMWI1NjdmNjA5NzdiMzU4M2FiMDIwY2M0ZWM3ZmNjMDMyZWZkOTgzuoDADg==: --dhchap-ctrl-secret DHHC-1:01:NGVkNzMzN2FmMTc5YzhjNTc0MjEzOGU4MWY3MzQ4YzlGMVrI: 00:21:04.871 14:19:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.871 14:19:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:21:04.871 14:19:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.871 14:19:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.871 14:19:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.871 14:19:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:04.871 14:19:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:04.871 14:19:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:04.871 14:19:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:21:04.871 14:19:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:04.871 14:19:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:04.871 14:19:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:04.871 14:19:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:04.871 14:19:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.871 14:19:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key3 00:21:04.871 14:19:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.871 14:19:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.129 14:19:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.129 14:19:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:05.129 14:19:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:05.695 00:21:05.695 14:19:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:05.695 14:19:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:05.695 14:19:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.695 14:19:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.696 14:19:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.696 14:19:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.696 14:19:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.696 14:19:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
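Each iteration of the keyid loop provisions one key and re-runs the handshake. For keyid 3 the controller-key array entry is empty, so the --dhchap-ctrlr-key argument drops out and only unidirectional (host-to-controller) authentication is exercised this round. Schematically, one iteration amounts to the following (a sketch assembled from the trace; rpc_cmd targets the nvmf target's default socket and hostrpc wraps rpc.py -s /var/tmp/host.sock, as the @31 expansions above show):

    # Host side: restrict the bdev_nvme DH-HMAC-CHAP negotiation parameters.
    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    # Target side: allow this host NQN, keyed with key3 (no controller key this round).
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 \
        --dhchap-key key3
    # Host side: attach; the fabrics CONNECT now performs the auth exchange.
    hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 \
        -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
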
00:21:05.696 14:19:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:05.696 { 00:21:05.696 "cntlid": 135, 00:21:05.696 "qid": 0, 00:21:05.696 "state": "enabled", 00:21:05.696 "listen_address": { 00:21:05.696 "trtype": "RDMA", 00:21:05.696 "adrfam": "IPv4", 00:21:05.696 "traddr": "192.168.100.8", 00:21:05.696 "trsvcid": "4420" 00:21:05.696 }, 00:21:05.696 "peer_address": { 00:21:05.696 "trtype": "RDMA", 00:21:05.696 "adrfam": "IPv4", 00:21:05.696 "traddr": "192.168.100.8", 00:21:05.696 "trsvcid": "40548" 00:21:05.696 }, 00:21:05.696 "auth": { 00:21:05.696 "state": "completed", 00:21:05.696 "digest": "sha512", 00:21:05.696 "dhgroup": "ffdhe6144" 00:21:05.696 } 00:21:05.696 } 00:21:05.696 ]' 00:21:05.696 14:19:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:05.953 14:19:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:05.953 14:19:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:05.953 14:19:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:05.953 14:19:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:05.953 14:19:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.953 14:19:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.953 14:19:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.217 14:19:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:03:YjJkYTdlMzNiOWZkZjVkYTM3MjlmYmRkZDE1ZDhiMTQ2M2VhYTg3NTlkNjQ0M2FiZmNmZTJiOTVhZWZiYzg4M2IKApo=: 00:21:07.150 14:19:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.407 14:19:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:21:07.407 14:19:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.407 14:19:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.407 14:19:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.407 14:19:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:07.407 14:19:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:07.407 14:19:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:07.407 14:19:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:07.665 14:19:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:21:07.665 14:19:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key 
ckey qpairs 00:21:07.665 14:19:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:07.665 14:19:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:07.665 14:19:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:07.665 14:19:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.665 14:19:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.665 14:19:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.665 14:19:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.665 14:19:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.665 14:19:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.665 14:19:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.597 00:21:08.597 14:19:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:08.597 14:19:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:08.597 14:19:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.855 14:19:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.855 14:19:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.855 14:19:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.855 14:19:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.855 14:19:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.855 14:19:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:08.855 { 00:21:08.855 "cntlid": 137, 00:21:08.855 "qid": 0, 00:21:08.855 "state": "enabled", 00:21:08.855 "listen_address": { 00:21:08.855 "trtype": "RDMA", 00:21:08.855 "adrfam": "IPv4", 00:21:08.855 "traddr": "192.168.100.8", 00:21:08.855 "trsvcid": "4420" 00:21:08.855 }, 00:21:08.855 "peer_address": { 00:21:08.855 "trtype": "RDMA", 00:21:08.855 "adrfam": "IPv4", 00:21:08.855 "traddr": "192.168.100.8", 00:21:08.855 "trsvcid": "49351" 00:21:08.855 }, 00:21:08.855 "auth": { 00:21:08.855 "state": "completed", 00:21:08.855 "digest": "sha512", 00:21:08.855 "dhgroup": "ffdhe8192" 00:21:08.855 } 00:21:08.855 } 00:21:08.855 ]' 00:21:08.855 14:19:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:08.855 14:19:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:08.855 14:19:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:21:08.855 14:19:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:08.855 14:19:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:08.855 14:19:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.855 14:19:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.855 14:19:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.113 14:19:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:00:MTJkNWI3YWM1ZWVmZDljNTg2NTY5OTUwMmVmYjU4ODNjYjY0N2UxYjNjZDM0M2Jj57D/bQ==: --dhchap-ctrl-secret DHHC-1:03:MzE3MjY2ZDlkM2I2YTM2YjMyN2VhNmE1OGYyMTFmNWZhM2I5MTUxN2IyMmM5ZWExNzcwNWJhNWI2MjVhMDcxNP858Ys=: 00:21:10.484 14:19:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.484 14:19:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:21:10.484 14:19:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.484 14:19:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.484 14:19:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.484 14:19:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:10.484 14:19:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:10.484 14:19:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:10.741 14:19:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:21:10.741 14:19:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:10.741 14:19:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:10.741 14:19:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:10.741 14:19:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:10.741 14:19:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.741 14:19:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.741 14:19:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.741 14:19:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.741 14:19:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.741 14:19:37 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.741 14:19:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.674 00:21:11.674 14:19:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:11.674 14:19:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:11.674 14:19:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.674 14:19:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.674 14:19:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.674 14:19:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.674 14:19:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.674 14:19:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.674 14:19:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:11.674 { 00:21:11.674 "cntlid": 139, 00:21:11.674 "qid": 0, 00:21:11.674 "state": "enabled", 00:21:11.674 "listen_address": { 00:21:11.674 "trtype": "RDMA", 00:21:11.674 "adrfam": "IPv4", 00:21:11.674 "traddr": "192.168.100.8", 00:21:11.674 "trsvcid": "4420" 00:21:11.674 }, 00:21:11.674 "peer_address": { 00:21:11.674 "trtype": "RDMA", 00:21:11.674 "adrfam": "IPv4", 00:21:11.674 "traddr": "192.168.100.8", 00:21:11.674 "trsvcid": "46380" 00:21:11.674 }, 00:21:11.674 "auth": { 00:21:11.674 "state": "completed", 00:21:11.674 "digest": "sha512", 00:21:11.674 "dhgroup": "ffdhe8192" 00:21:11.674 } 00:21:11.674 } 00:21:11.674 ]' 00:21:11.674 14:19:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:11.932 14:19:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:11.932 14:19:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:11.932 14:19:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:11.932 14:19:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:11.932 14:19:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.932 14:19:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.932 14:19:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.190 14:19:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret 
DHHC-1:01:NDUxZWQ2M2NmNzUyYzc5ZGFmODY3MWVjYjY2MzlmNWVZ/epb: --dhchap-ctrl-secret DHHC-1:02:MmE0M2Y3ZTI4OTQ0YzU3MzE2YmUwMTlhNTgxN2Q2ZDU2MDRlMTNlOWEzMTVlNTg06IkDFg==: 00:21:13.562 14:19:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.562 14:19:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:21:13.562 14:19:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.562 14:19:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.562 14:19:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.562 14:19:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:13.562 14:19:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:13.562 14:19:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:13.821 14:19:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:21:13.821 14:19:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:13.821 14:19:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:13.821 14:19:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:13.821 14:19:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:13.821 14:19:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.821 14:19:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.821 14:19:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.821 14:19:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.821 14:19:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.821 14:19:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.821 14:19:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.754 00:21:14.754 14:19:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:14.754 14:19:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:14.754 14:19:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.754 14:19:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.754 14:19:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.754 14:19:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.754 14:19:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.012 14:19:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.012 14:19:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:15.012 { 00:21:15.012 "cntlid": 141, 00:21:15.012 "qid": 0, 00:21:15.012 "state": "enabled", 00:21:15.012 "listen_address": { 00:21:15.012 "trtype": "RDMA", 00:21:15.012 "adrfam": "IPv4", 00:21:15.012 "traddr": "192.168.100.8", 00:21:15.012 "trsvcid": "4420" 00:21:15.012 }, 00:21:15.013 "peer_address": { 00:21:15.013 "trtype": "RDMA", 00:21:15.013 "adrfam": "IPv4", 00:21:15.013 "traddr": "192.168.100.8", 00:21:15.013 "trsvcid": "46262" 00:21:15.013 }, 00:21:15.013 "auth": { 00:21:15.013 "state": "completed", 00:21:15.013 "digest": "sha512", 00:21:15.013 "dhgroup": "ffdhe8192" 00:21:15.013 } 00:21:15.013 } 00:21:15.013 ]' 00:21:15.013 14:19:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:15.013 14:19:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.013 14:19:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:15.013 14:19:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:15.013 14:19:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:15.013 14:19:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.013 14:19:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.013 14:19:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.271 14:19:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:02:MTdlMDdmOTVhMWI1NjdmNjA5NzdiMzU4M2FiMDIwY2M0ZWM3ZmNjMDMyZWZkOTgzuoDADg==: --dhchap-ctrl-secret DHHC-1:01:NGVkNzMzN2FmMTc5YzhjNTc0MjEzOGU4MWY3MzQ4YzlGMVrI: 00:21:16.208 14:19:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.468 14:19:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:21:16.468 14:19:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.468 14:19:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.468 14:19:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.468 14:19:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:16.468 14:19:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:16.468 14:19:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:16.756 14:19:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:21:16.756 14:19:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:16.756 14:19:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:16.756 14:19:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:16.756 14:19:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:16.756 14:19:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.756 14:19:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key3 00:21:16.756 14:19:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.756 14:19:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.756 14:19:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.756 14:19:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:16.756 14:19:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:17.689 00:21:17.689 14:19:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:17.689 14:19:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:17.689 14:19:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.947 14:19:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.947 14:19:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.947 14:19:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.947 14:19:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.947 14:19:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.947 14:19:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:17.947 { 00:21:17.947 "cntlid": 143, 00:21:17.947 "qid": 0, 00:21:17.947 "state": "enabled", 00:21:17.947 "listen_address": { 00:21:17.947 "trtype": "RDMA", 00:21:17.947 "adrfam": "IPv4", 00:21:17.947 "traddr": "192.168.100.8", 00:21:17.947 "trsvcid": "4420" 00:21:17.947 }, 00:21:17.947 "peer_address": { 00:21:17.947 "trtype": "RDMA", 00:21:17.947 "adrfam": "IPv4", 00:21:17.947 "traddr": "192.168.100.8", 00:21:17.947 "trsvcid": "40092" 
00:21:17.947 }, 00:21:17.947 "auth": { 00:21:17.947 "state": "completed", 00:21:17.947 "digest": "sha512", 00:21:17.947 "dhgroup": "ffdhe8192" 00:21:17.947 } 00:21:17.947 } 00:21:17.947 ]' 00:21:17.947 14:19:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:17.947 14:19:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:17.947 14:19:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:17.947 14:19:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:17.947 14:19:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:17.947 14:19:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.947 14:19:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.947 14:19:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.204 14:19:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:03:YjJkYTdlMzNiOWZkZjVkYTM3MjlmYmRkZDE1ZDhiMTQ2M2VhYTg3NTlkNjQ0M2FiZmNmZTJiOTVhZWZiYzg4M2IKApo=: 00:21:19.577 14:19:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.577 14:19:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:21:19.577 14:19:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.577 14:19:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.577 14:19:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.577 14:19:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:19.577 14:19:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:21:19.577 14:19:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:19.577 14:19:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:19.577 14:19:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:19.577 14:19:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:19.835 14:19:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:21:19.835 14:19:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:19.835 14:19:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:19.835 14:19:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:19.835 
14:19:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:19.835 14:19:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.835 14:19:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.835 14:19:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.835 14:19:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.835 14:19:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.835 14:19:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.835 14:19:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.769 00:21:20.769 14:19:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:20.769 14:19:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:20.769 14:19:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.027 14:19:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.027 14:19:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.027 14:19:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.027 14:19:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.027 14:19:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.027 14:19:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:21.027 { 00:21:21.027 "cntlid": 145, 00:21:21.027 "qid": 0, 00:21:21.027 "state": "enabled", 00:21:21.027 "listen_address": { 00:21:21.027 "trtype": "RDMA", 00:21:21.027 "adrfam": "IPv4", 00:21:21.027 "traddr": "192.168.100.8", 00:21:21.027 "trsvcid": "4420" 00:21:21.027 }, 00:21:21.027 "peer_address": { 00:21:21.027 "trtype": "RDMA", 00:21:21.027 "adrfam": "IPv4", 00:21:21.027 "traddr": "192.168.100.8", 00:21:21.027 "trsvcid": "46598" 00:21:21.027 }, 00:21:21.027 "auth": { 00:21:21.027 "state": "completed", 00:21:21.027 "digest": "sha512", 00:21:21.027 "dhgroup": "ffdhe8192" 00:21:21.027 } 00:21:21.027 } 00:21:21.027 ]' 00:21:21.027 14:19:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:21.027 14:19:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.027 14:19:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:21.027 14:19:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:21.027 14:19:48 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:21.027 14:19:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.027 14:19:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.027 14:19:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.285 14:19:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:00:MTJkNWI3YWM1ZWVmZDljNTg2NTY5OTUwMmVmYjU4ODNjYjY0N2UxYjNjZDM0M2Jj57D/bQ==: --dhchap-ctrl-secret DHHC-1:03:MzE3MjY2ZDlkM2I2YTM2YjMyN2VhNmE1OGYyMTFmNWZhM2I5MTUxN2IyMmM5ZWExNzcwNWJhNWI2MjVhMDcxNP858Ys=: 00:21:22.657 14:19:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.657 14:19:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:21:22.657 14:19:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.657 14:19:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.657 14:19:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.657 14:19:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key1 00:21:22.657 14:19:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.657 14:19:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.657 14:19:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.657 14:19:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:22.657 14:19:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:22.657 14:19:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:22.657 14:19:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:22.657 14:19:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:22.657 14:19:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:22.657 14:19:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:22.657 14:19:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:22.657 14:19:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:54.715 request: 00:21:54.715 { 00:21:54.715 "name": "nvme0", 00:21:54.715 "trtype": "rdma", 00:21:54.715 "traddr": "192.168.100.8", 00:21:54.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911", 00:21:54.715 "adrfam": "ipv4", 00:21:54.715 "trsvcid": "4420", 00:21:54.715 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:54.715 "dhchap_key": "key2", 00:21:54.715 "method": "bdev_nvme_attach_controller", 00:21:54.715 "req_id": 1 00:21:54.715 } 00:21:54.715 Got JSON-RPC error response 00:21:54.715 response: 00:21:54.715 { 00:21:54.715 "code": -5, 00:21:54.715 "message": "Input/output error" 00:21:54.715 } 00:21:54.715 14:20:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:54.715 14:20:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:54.715 14:20:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:54.715 14:20:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:54.715 14:20:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:21:54.715 14:20:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.715 14:20:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.715 14:20:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.715 14:20:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.715 14:20:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.715 14:20:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.715 14:20:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.715 14:20:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:54.715 14:20:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:54.715 14:20:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:54.715 14:20:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:54.715 14:20:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:54.715 14:20:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 
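These NOT-wrapped attach calls are the negative half of the test: NOT (from autotest_common.sh) inverts the wrapped command's exit status, so the step passes only when the attach fails. With the target provisioned for key1 but the host presenting key2 (and, in the following case, presenting controller key ckey2 where ckey1 is expected), the DH-HMAC-CHAP exchange cannot complete, and rpc.py surfaces the JSON-RPC error dumped above ("code": -5, "Input/output error"). The roughly 30-second jumps in the timestamps around each failure suggest the attach only returns after the host gives up on the connect attempt. The pattern, schematically (arguments taken from the trace):

    # Expected to fail: the target only knows key1, the host offers key2.
    NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
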
00:21:54.715 14:20:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:54.715 14:20:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:54.715 14:20:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:54.715 request: 00:21:54.715 { 00:21:54.715 "name": "nvme0", 00:21:54.715 "trtype": "rdma", 00:21:54.715 "traddr": "192.168.100.8", 00:21:54.716 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911", 00:21:54.716 "adrfam": "ipv4", 00:21:54.716 "trsvcid": "4420", 00:21:54.716 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:54.716 "dhchap_key": "key1", 00:21:54.716 "dhchap_ctrlr_key": "ckey2", 00:21:54.716 "method": "bdev_nvme_attach_controller", 00:21:54.716 "req_id": 1 00:21:54.716 } 00:21:54.716 Got JSON-RPC error response 00:21:54.716 response: 00:21:54.716 { 00:21:54.716 "code": -5, 00:21:54.716 "message": "Input/output error" 00:21:54.716 } 00:21:54.716 14:20:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:54.716 14:20:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:54.716 14:20:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:54.716 14:20:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:54.716 14:20:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:21:54.716 14:20:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.716 14:20:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.716 14:20:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.716 14:20:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key1 00:21:54.716 14:20:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.716 14:20:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.716 14:20:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.716 14:20:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.716 14:20:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:54.716 14:20:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.716 14:20:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:54.716 14:20:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:54.716 14:20:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:54.716 14:20:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:54.716 14:20:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.716 14:20:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:26.820 request: 00:22:26.820 { 00:22:26.820 "name": "nvme0", 00:22:26.820 "trtype": "rdma", 00:22:26.820 "traddr": "192.168.100.8", 00:22:26.820 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911", 00:22:26.820 "adrfam": "ipv4", 00:22:26.820 "trsvcid": "4420", 00:22:26.820 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:26.820 "dhchap_key": "key1", 00:22:26.820 "dhchap_ctrlr_key": "ckey1", 00:22:26.820 "method": "bdev_nvme_attach_controller", 00:22:26.820 "req_id": 1 00:22:26.820 } 00:22:26.820 Got JSON-RPC error response 00:22:26.820 response: 00:22:26.820 { 00:22:26.820 "code": -5, 00:22:26.820 "message": "Input/output error" 00:22:26.820 } 00:22:26.820 14:20:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:26.820 14:20:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:26.820 14:20:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:26.820 14:20:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:26.820 14:20:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:22:26.820 14:20:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.820 14:20:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.820 14:20:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.820 14:20:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 112661 00:22:26.820 14:20:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 112661 ']' 00:22:26.820 14:20:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 112661 00:22:26.820 14:20:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:26.820 14:20:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:26.820 14:20:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 112661 00:22:26.821 14:20:52 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:26.821 14:20:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:26.821 14:20:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 112661' 00:22:26.821 killing process with pid 112661 00:22:26.821 14:20:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 112661 00:22:26.821 14:20:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 112661 00:22:26.821 14:20:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:26.821 14:20:52 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:26.821 14:20:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:26.821 14:20:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.821 14:20:52 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=143558 00:22:26.821 14:20:52 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:26.821 14:20:52 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 143558 00:22:26.821 14:20:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 143558 ']' 00:22:26.821 14:20:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.821 14:20:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:26.821 14:20:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.821 14:20:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:26.821 14:20:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.821 14:20:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:26.821 14:20:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:22:26.821 14:20:53 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:26.821 14:20:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:26.821 14:20:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.821 14:20:53 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:26.821 14:20:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:26.821 14:20:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 143558 00:22:26.821 14:20:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 143558 ']' 00:22:26.821 14:20:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.821 14:20:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:26.821 14:20:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
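At this point the first target process (pid 112661) has been killed and nvmfappstart launches a fresh one (pid 143558) with DH-HMAC-CHAP debug logging enabled. A sketch of what those arguments do (binary path and flags as shown above; --wait-for-rpc defers subsystem initialization until an explicit framework_start_init RPC, presumably issued by the harness's setup RPCs, which the xtrace excerpt does not show):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    # waitforlisten (autotest_common.sh) blocks until the app's RPC socket
    # at /var/tmp/spdk.sock accepts connections.
    waitforlisten "$nvmfpid"
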
00:22:26.821 14:20:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:26.821 14:20:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.821 14:20:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:26.821 14:20:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:22:26.821 14:20:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:22:26.821 14:20:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.821 14:20:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.821 14:20:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.821 14:20:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:22:26.821 14:20:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:26.821 14:20:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:26.821 14:20:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:26.821 14:20:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:26.821 14:20:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:26.821 14:20:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key3 00:22:26.821 14:20:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.821 14:20:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.821 14:20:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.821 14:20:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:26.821 14:20:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:27.386 00:22:27.386 14:20:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:27.386 14:20:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:27.386 14:20:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.644 14:20:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.644 14:20:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:27.644 14:20:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.644 14:20:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.644 14:20:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.644 14:20:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:22:27.644 { 00:22:27.644 "cntlid": 1, 00:22:27.644 "qid": 0, 00:22:27.644 "state": "enabled", 00:22:27.644 "listen_address": { 00:22:27.644 "trtype": "RDMA", 00:22:27.644 "adrfam": "IPv4", 00:22:27.644 "traddr": "192.168.100.8", 00:22:27.644 "trsvcid": "4420" 00:22:27.644 }, 00:22:27.644 "peer_address": { 00:22:27.644 "trtype": "RDMA", 00:22:27.644 "adrfam": "IPv4", 00:22:27.644 "traddr": "192.168.100.8", 00:22:27.644 "trsvcid": "56813" 00:22:27.644 }, 00:22:27.644 "auth": { 00:22:27.644 "state": "completed", 00:22:27.644 "digest": "sha512", 00:22:27.644 "dhgroup": "ffdhe8192" 00:22:27.644 } 00:22:27.644 } 00:22:27.644 ]' 00:22:27.644 14:20:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:27.644 14:20:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:27.644 14:20:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:27.644 14:20:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:27.644 14:20:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:27.901 14:20:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.901 14:20:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.901 14:20:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.158 14:20:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid 6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-secret DHHC-1:03:YjJkYTdlMzNiOWZkZjVkYTM3MjlmYmRkZDE1ZDhiMTQ2M2VhYTg3NTlkNjQ0M2FiZmNmZTJiOTVhZWZiYzg4M2IKApo=: 00:22:29.091 14:20:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:29.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:29.349 14:20:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:22:29.349 14:20:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.349 14:20:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.349 14:20:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.349 14:20:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --dhchap-key key3 00:22:29.349 14:20:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.349 14:20:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.349 14:20:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.349 14:20:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:29.349 14:20:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:29.606 14:20:56 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:29.606 14:20:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:29.606 14:20:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:29.607 14:20:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:29.607 14:20:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:29.607 14:20:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:29.607 14:20:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:29.607 14:20:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:29.607 14:20:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:01.676 request: 00:23:01.676 { 00:23:01.676 "name": "nvme0", 00:23:01.676 "trtype": "rdma", 00:23:01.676 "traddr": "192.168.100.8", 00:23:01.676 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911", 00:23:01.676 "adrfam": "ipv4", 00:23:01.676 "trsvcid": "4420", 00:23:01.676 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:01.676 "dhchap_key": "key3", 00:23:01.676 "method": "bdev_nvme_attach_controller", 00:23:01.676 "req_id": 1 00:23:01.676 } 00:23:01.676 Got JSON-RPC error response 00:23:01.676 response: 00:23:01.676 { 00:23:01.676 "code": -5, 00:23:01.676 "message": "Input/output error" 00:23:01.676 } 00:23:01.676 14:21:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:01.676 14:21:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:01.676 14:21:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:01.676 14:21:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:01.676 14:21:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:23:01.676 14:21:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:23:01.676 14:21:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:01.676 14:21:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:01.676 14:21:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:01.676 14:21:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:01.676 14:21:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:01.676 14:21:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:01.676 14:21:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:01.676 14:21:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:01.676 14:21:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:01.676 14:21:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:01.676 14:21:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:33.763 request: 00:23:33.763 { 00:23:33.763 "name": "nvme0", 00:23:33.763 "trtype": "rdma", 00:23:33.763 "traddr": "192.168.100.8", 00:23:33.763 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911", 00:23:33.763 "adrfam": "ipv4", 00:23:33.763 "trsvcid": "4420", 00:23:33.763 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:33.763 "dhchap_key": "key3", 00:23:33.763 "method": "bdev_nvme_attach_controller", 00:23:33.763 "req_id": 1 00:23:33.763 } 00:23:33.763 Got JSON-RPC error response 00:23:33.763 response: 00:23:33.763 { 00:23:33.763 "code": -5, 00:23:33.763 "message": "Input/output error" 00:23:33.763 } 00:23:33.763 14:21:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:33.763 14:21:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:33.763 14:21:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:33.763 14:21:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:33.763 14:21:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:23:33.763 14:21:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:23:33.763 14:21:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:23:33.763 14:21:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:33.763 14:21:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:33.763 14:21:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:33.763 14:21:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:23:33.763 14:21:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.763 14:21:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.763 14:21:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.763 14:21:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:23:33.763 14:21:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.763 14:21:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.763 14:21:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.763 14:21:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:33.763 14:21:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:33.763 14:21:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:33.763 14:21:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:33.763 14:21:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:33.763 14:21:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:33.763 14:21:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:33.763 14:21:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:33.763 14:21:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:33.763 request: 00:23:33.763 { 00:23:33.763 "name": "nvme0", 00:23:33.763 "trtype": "rdma", 00:23:33.763 "traddr": "192.168.100.8", 00:23:33.763 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911", 00:23:33.763 "adrfam": "ipv4", 00:23:33.763 "trsvcid": "4420", 00:23:33.763 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:33.763 "dhchap_key": "key0", 00:23:33.763 "dhchap_ctrlr_key": "key1", 00:23:33.763 "method": "bdev_nvme_attach_controller", 00:23:33.763 "req_id": 1 00:23:33.763 } 00:23:33.763 Got JSON-RPC error response 00:23:33.763 response: 00:23:33.763 { 00:23:33.763 "code": -5, 
00:23:33.764 "message": "Input/output error" 00:23:33.764 } 00:23:33.764 14:21:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:33.764 14:21:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:33.764 14:21:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:33.764 14:21:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:33.764 14:21:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:33.764 14:21:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:33.764 00:23:33.764 14:21:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:23:33.764 14:21:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:33.764 14:21:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 112680 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 112680 ']' 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 112680 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 112680 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 112680' 00:23:33.764 killing process with pid 112680 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 112680 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 112680 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@117 -- 
# sync 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:23:33.764 rmmod nvme_rdma 00:23:33.764 rmmod nvme_fabrics 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 143558 ']' 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 143558 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 143558 ']' 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 143558 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 143558 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 143558' 00:23:33.764 killing process with pid 143558 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 143558 00:23:33.764 14:21:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 143558 00:23:33.764 14:22:00 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:33.764 14:22:00 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:23:33.764 14:22:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.93s /tmp/spdk.key-sha256.EZJ /tmp/spdk.key-sha384.iTs /tmp/spdk.key-sha512.r5l /tmp/spdk.key-sha512.Z62 /tmp/spdk.key-sha384.Gyj /tmp/spdk.key-sha256.4E9 '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:23:33.764 00:23:33.764 real 5m24.305s 00:23:33.764 user 11m56.816s 00:23:33.764 sys 0m19.781s 00:23:33.764 14:22:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:33.764 14:22:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.764 ************************************ 00:23:33.764 END TEST nvmf_auth_target 00:23:33.764 ************************************ 00:23:33.764 14:22:00 nvmf_rdma -- nvmf/nvmf.sh@59 -- # '[' rdma = tcp ']' 00:23:33.764 14:22:00 nvmf_rdma -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:23:33.764 14:22:00 nvmf_rdma -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:23:33.764 14:22:00 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 
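The repeated "NOT hostrpc bdev_nvme_attach_controller ... Input/output error" exchanges above are the auth test's expected negative paths: a host offering a DH-HMAC-CHAP key the subsystem no longer accepts must fail to attach. In plain shell the check reduces to roughly the sketch below; the rpc.py path, socket, address, NQNs, and flags are all copied verbatim from this log.

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911
# The attach must be rejected: rpc.py exits non-zero and prints the
# JSON-RPC error response (code -5, "Input/output error") seen above.
if "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3; then
    echo "attach unexpectedly succeeded" >&2
    exit 1
fi

The harness's NOT/hostrpc wrappers above implement the same inversion, with extra bookkeeping around the exit status (the es=1 lines in the log).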
00:23:33.764 14:22:00 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:33.764 14:22:00 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:23:33.764 ************************************ 00:23:33.764 START TEST nvmf_fuzz 00:23:33.764 ************************************ 00:23:33.764 14:22:00 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:23:33.764 * Looking for test storage... 00:23:33.764 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:23:33.764 14:22:00 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:33.764 14:22:00 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:23:33.764 14:22:00 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:33.764 14:22:00 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.764 14:22:00 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:33.764 14:22:00 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.764 14:22:00 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.764 14:22:00 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:33.764 14:22:00 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:33.764 14:22:00 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:33.764 14:22:00 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:33.764 14:22:00 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:33.764 14:22:00 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:23:33.764 14:22:00 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:23:33.764 14:22:00 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:33.764 14:22:00 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:33.764 14:22:00 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:33.764 14:22:00 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:33.764 14:22:00 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:33.764 14:22:00 nvmf_rdma.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:33.764 14:22:00 nvmf_rdma.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:33.764 14:22:00 nvmf_rdma.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:33.765 14:22:00 nvmf_rdma.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.765 14:22:00 nvmf_rdma.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.765 14:22:00 nvmf_rdma.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.765 14:22:00 nvmf_rdma.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:23:33.765 14:22:00 nvmf_rdma.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.765 14:22:00 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:23:33.765 14:22:00 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:33.765 14:22:00 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:33.765 14:22:00 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:33.765 14:22:00 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:33.765 14:22:00 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:33.765 14:22:00 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:33.765 14:22:00 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:33.765 14:22:00 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:33.765 14:22:00 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:33.765 14:22:00 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:23:33.765 14:22:00 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:33.765 14:22:00 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:33.765 14:22:00 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:33.765 14:22:00 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:33.765 14:22:00 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.765 14:22:00 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:33.765 14:22:00 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.765 14:22:00 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt 
]] 00:23:33.765 14:22:00 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:33.765 14:22:00 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:23:33.765 14:22:00 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:35.666 14:22:02 
nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:23:35.666 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:23:35.666 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:23:35.666 Found net devices under 0000:81:00.0: mlx_0_0 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:35.666 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:23:35.667 Found net devices under 0000:81:00.1: mlx_0_1 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:35.667 14:22:02 
nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@420 -- # rdma_device_init 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@58 -- # uname 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@502 -- # allocate_nic_ips 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@105 -- # continue 2 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@105 -- # continue 2 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- 
nvmf/common.sh@113 -- # cut -d/ -f1 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:35.667 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:35.667 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:23:35.667 altname enp129s0f0np0 00:23:35.667 inet 192.168.100.8/24 scope global mlx_0_0 00:23:35.667 valid_lft forever preferred_lft forever 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:35.667 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:35.667 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:23:35.667 altname enp129s0f1np1 00:23:35.667 inet 192.168.100.9/24 scope global mlx_0_1 00:23:35.667 valid_lft forever preferred_lft forever 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@105 -- # continue 2 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz 
-- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@105 -- # continue 2 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:23:35.667 192.168.100.9' 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:23:35.667 192.168.100.9' 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@457 -- # head -n 1 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:23:35.667 192.168.100.9' 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@458 -- # tail -n +2 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@458 -- # head -n 1 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=153876 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 153876 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@827 -- # '[' -z 153876 ']' 00:23:35.667 14:22:02 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.668 14:22:02 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:23:35.668 14:22:02 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.668 14:22:02 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:35.668 14:22:02 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:35.926 14:22:03 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:35.926 14:22:03 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@860 -- # return 0 00:23:35.926 14:22:03 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:35.926 14:22:03 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.926 14:22:03 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:35.926 14:22:03 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.926 14:22:03 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:23:35.926 14:22:03 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.926 14:22:03 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:35.926 Malloc0 00:23:35.926 14:22:03 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.926 14:22:03 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:35.926 14:22:03 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.926 14:22:03 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:35.926 14:22:03 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.926 14:22:03 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:35.926 14:22:03 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.926 14:22:03 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:35.926 14:22:03 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.926 14:22:03 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:35.926 14:22:03 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.926 14:22:03 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:35.926 14:22:03 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.926 14:22:03 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' 00:23:35.926 14:22:03 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -N -a 00:24:07.984 Fuzzing completed. 
Shutting down the fuzz application 00:24:07.984 00:24:07.984 Dumping successful admin opcodes: 00:24:07.984 8, 9, 10, 24, 00:24:07.984 Dumping successful io opcodes: 00:24:07.984 0, 9, 00:24:07.984 NS: 0x200003af1f00 I/O qp, Total commands completed: 626754, total successful commands: 3653, random_seed: 2217492160 00:24:07.984 NS: 0x200003af1f00 admin qp, Total commands completed: 87280, total successful commands: 697, random_seed: 3528128640 00:24:07.984 14:22:33 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -j /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:07.984 Fuzzing completed. Shutting down the fuzz application 00:24:07.984 00:24:07.984 Dumping successful admin opcodes: 00:24:07.984 24, 00:24:07.984 Dumping successful io opcodes: 00:24:07.984 00:24:07.984 NS: 0x200003af1f00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1310692934 00:24:07.984 NS: 0x200003af1f00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1310793365 00:24:07.984 14:22:35 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:07.984 14:22:35 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.984 14:22:35 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:07.984 14:22:35 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.984 14:22:35 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:07.984 14:22:35 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:07.984 14:22:35 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:07.984 14:22:35 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:24:07.984 14:22:35 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:24:07.984 14:22:35 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:24:07.984 14:22:35 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:24:07.984 14:22:35 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:07.984 14:22:35 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:24:07.984 rmmod nvme_rdma 00:24:07.984 rmmod nvme_fabrics 00:24:07.984 14:22:35 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:07.984 14:22:35 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:24:07.984 14:22:35 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:24:07.984 14:22:35 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 153876 ']' 00:24:07.984 14:22:35 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 153876 00:24:07.984 14:22:35 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@946 -- # '[' -z 153876 ']' 00:24:07.984 14:22:35 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@950 -- # kill -0 153876 00:24:07.984 14:22:35 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@951 -- # uname 00:24:07.984 14:22:35 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:07.984 14:22:35 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 153876 00:24:07.984 14:22:35 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:07.984 14:22:35 
nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:07.984 14:22:35 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 153876' 00:24:07.984 killing process with pid 153876 00:24:07.984 14:22:35 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@965 -- # kill 153876 00:24:07.984 14:22:35 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@970 -- # wait 153876 00:24:08.243 14:22:35 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:08.243 14:22:35 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:24:08.243 14:22:35 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:08.243 00:24:08.243 real 0m35.282s 00:24:08.243 user 0m50.002s 00:24:08.243 sys 0m16.047s 00:24:08.243 14:22:35 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:08.243 14:22:35 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:08.243 ************************************ 00:24:08.243 END TEST nvmf_fuzz 00:24:08.243 ************************************ 00:24:08.243 14:22:35 nvmf_rdma -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:24:08.243 14:22:35 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:08.243 14:22:35 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:08.243 14:22:35 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:08.243 ************************************ 00:24:08.243 START TEST nvmf_multiconnection 00:24:08.243 ************************************ 00:24:08.243 14:22:35 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:24:08.243 * Looking for test storage... 
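Before the log moves on to multiconnection, the two fuzz passes that just completed both used the same nvme_fuzz binary against the RDMA listener created by fabrics_fuzz.sh. A standalone equivalent is sketched below with every path and flag copied from the invocations above; judging by the run, -t appears to bound the first pass to 30 seconds and -S to fix the random seed, while the second pass replays the commands described in example.json instead of generating random ones.

fuzz=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz
trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420'
# Pass 1: timed, seeded random fuzzing of the I/O and admin (-a) queues.
"$fuzz" -m 0x2 -t 30 -S 123456 -F "$trid" -N -a
# Pass 2: JSON-driven replay against the same transport ID.
"$fuzz" -m 0x2 -F "$trid" -j /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a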
00:24:08.243 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:24:08.243 14:22:35 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:08.243 14:22:35 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:08.243 14:22:35 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.243 14:22:35 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.243 14:22:35 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.243 14:22:35 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.243 14:22:35 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.243 14:22:35 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- 
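One detail worth pulling out of the common.sh setup traced above: the host identity reused by every later nvme connect is generated once here. A minimal sketch of that derivation, assuming nvme-cli is installed (the ##*: strip is an assumption about how common.sh splits the NQN, but the resulting values match this log):

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep just the uuid suffix
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")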
nvmf/common.sh@410 -- # local -g is_hw=no 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:24:08.244 14:22:35 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:24:10.776 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:24:10.776 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 
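The discovery above classifies NICs by PCI vendor:device pair; 0x15b3:0x1015 is a Mellanox ConnectX-4 Lx, so both ports land in the mlx list and the trace switches to the longer 'nvme connect -i 15' timeout. A standalone sketch of the same sysfs lookup (the loop is illustrative; the paths are standard sysfs):

  for pci in 0000:81:00.0 0000:81:00.1; do
      vendor=$(cat /sys/bus/pci/devices/$pci/vendor)
      device=$(cat /sys/bus/pci/devices/$pci/device)
      echo "Found $pci ($vendor - $device)"
  done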
'Found net devices under 0000:81:00.0: mlx_0_0' 00:24:10.776 Found net devices under 0000:81:00.0: mlx_0_0 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:24:10.776 Found net devices under 0000:81:00.1: mlx_0_1 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:24:10.776 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@420 -- # rdma_device_init 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@58 -- # uname 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@62 -- # modprobe ib_cm 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@63 -- # modprobe ib_core 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@64 -- # modprobe ib_umad 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@66 -- # modprobe iw_cm 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@502 -- # allocate_nic_ips 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@73 -- # get_rdma_if_list 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@101 
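rdma_device_init above loads the full IB/RDMA core stack before any addresses are assigned. Condensed from the modprobe sequence in the trace:

  for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe $mod
  done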
-- # for net_dev in "${net_devs[@]}" 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@105 -- # continue 2 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@105 -- # continue 2 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:24:10.777 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:10.777 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:24:10.777 altname enp129s0f0np0 00:24:10.777 inet 192.168.100.8/24 scope global mlx_0_0 00:24:10.777 valid_lft forever preferred_lft forever 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:24:10.777 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:10.777 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:24:10.777 altname enp129s0f1np1 00:24:10.777 inet 192.168.100.9/24 scope global mlx_0_1 
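allocate_nic_ips resolves each RDMA-capable netdev to its IPv4 address with the ip/awk/cut pipeline traced above: mlx_0_0 yields 192.168.100.8 and mlx_0_1 yields 192.168.100.9. The same extraction as a one-liner:

  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8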
00:24:10.777 valid_lft forever preferred_lft forever 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@86 -- # get_rdma_if_list 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@105 -- # continue 2 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@105 -- # continue 2 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:10.777 14:22:37 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:24:10.777 14:22:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:10.777 14:22:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:10.778 14:22:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:10.778 14:22:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:10.778 14:22:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:10.778 14:22:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:24:10.778 14:22:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:10.778 14:22:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:10.778 
14:22:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:10.778 14:22:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:10.778 14:22:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:24:10.778 192.168.100.9' 00:24:10.778 14:22:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:24:10.778 192.168.100.9' 00:24:10.778 14:22:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@457 -- # head -n 1 00:24:10.778 14:22:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:10.778 14:22:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:24:10.778 192.168.100.9' 00:24:10.778 14:22:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@458 -- # tail -n +2 00:24:10.778 14:22:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@458 -- # head -n 1 00:24:10.778 14:22:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:10.778 14:22:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:24:10.778 14:22:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:10.778 14:22:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:24:10.778 14:22:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:24:10.778 14:22:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:24:10.778 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:10.778 14:22:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:10.778 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:10.778 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.778 14:22:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=159609 00:24:10.778 14:22:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:10.778 14:22:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 159609 00:24:10.778 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@827 -- # '[' -z 159609 ']' 00:24:10.778 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.778 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:10.778 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.778 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:10.778 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:10.778 [2024-07-24 14:22:38.079421] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
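nvmfappstart launches the target and blocks until its RPC socket answers. A condensed sketch of what the trace shows, run from the spdk repo root (the polling loop is an approximation of waitforlisten, not its exact body):

  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!                                   # 159609 in this run
  # poll until the RPC socket accepts commands
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      sleep 0.5
  done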
00:24:10.778 [2024-07-24 14:22:38.079510] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.778 EAL: No free 2048 kB hugepages reported on node 1 00:24:11.036 [2024-07-24 14:22:38.147960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:11.036 [2024-07-24 14:22:38.236863] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:11.036 [2024-07-24 14:22:38.236930] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:11.036 [2024-07-24 14:22:38.236944] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:11.036 [2024-07-24 14:22:38.236955] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:11.036 [2024-07-24 14:22:38.236964] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:11.036 [2024-07-24 14:22:38.237013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:11.036 [2024-07-24 14:22:38.237072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:11.036 [2024-07-24 14:22:38.237138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:11.036 [2024-07-24 14:22:38.237140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.036 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:11.036 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@860 -- # return 0 00:24:11.036 14:22:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:11.036 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:11.036 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.036 14:22:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:11.036 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:11.036 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.036 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.036 [2024-07-24 14:22:38.405560] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x20e49e0/0x20e8ed0) succeed. 00:24:11.294 [2024-07-24 14:22:38.416831] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20e5fd0/0x212a560) succeed. 
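With the target up, the transport is created over RPC before any subsystems exist; the two create_ib_device notices confirm both mlx5 ports registered with it. The RPC call from the trace, as it would be issued directly:

  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192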
00:24:11.294 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.294 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:11.294 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:11.294 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:11.294 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.294 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.294 Malloc1 00:24:11.294 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.294 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:11.294 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.294 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.294 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.294 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:11.294 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.294 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.294 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.294 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:11.294 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.294 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.294 [2024-07-24 14:22:38.614041] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:11.294 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.294 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:11.294 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:11.294 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.294 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.294 Malloc2 00:24:11.294 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.294 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:11.294 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.294 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.294 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.294 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 
Malloc2 00:24:11.294 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.294 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.294 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.294 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:24:11.294 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.294 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.294 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.553 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:11.553 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:11.553 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.553 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.553 Malloc3 00:24:11.553 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.553 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:11.553 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.553 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.553 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.553 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:11.553 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.553 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.553 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.553 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:24:11.553 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.553 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.553 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.553 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:11.553 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:11.553 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.553 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.553 Malloc4 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s 
SPDK4 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.554 Malloc5 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:11.554 14:22:38 
nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.554 Malloc6 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t rdma -a 192.168.100.8 -s 4420 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.554 Malloc7 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t rdma -a 192.168.100.8 -s 4420 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.554 14:22:38 
nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.554 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.814 Malloc8 00:24:11.814 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.814 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:11.814 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.814 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.814 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.814 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:11.814 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.814 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.814 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.814 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t rdma -a 192.168.100.8 -s 4420 00:24:11.814 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.814 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.814 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.814 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:11.814 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:11.814 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.814 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.814 Malloc9 00:24:11.814 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.814 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:11.814 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.814 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.814 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.814 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:11.814 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.814 14:22:38 nvmf_rdma.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:24:11.814 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.814 14:22:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t rdma -a 192.168.100.8 -s 4420 00:24:11.814 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.814 14:22:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.814 14:22:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.814 14:22:39 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:11.814 14:22:39 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:11.814 14:22:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.814 14:22:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.814 Malloc10 00:24:11.814 14:22:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.814 14:22:39 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:11.814 14:22:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.814 14:22:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.814 14:22:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.814 14:22:39 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:11.814 14:22:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.814 14:22:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.814 14:22:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.814 14:22:39 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t rdma -a 192.168.100.8 -s 4420 00:24:11.814 14:22:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.814 14:22:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.814 14:22:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.814 14:22:39 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:11.814 14:22:39 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:11.814 14:22:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.814 14:22:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.814 Malloc11 00:24:11.814 14:22:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.814 14:22:39 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:11.814 14:22:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.814 14:22:39 
nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.814 14:22:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.814 14:22:39 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:11.814 14:22:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.814 14:22:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.814 14:22:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.814 14:22:39 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t rdma -a 192.168.100.8 -s 4420 00:24:11.814 14:22:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.814 14:22:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:11.814 14:22:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.814 14:22:39 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:11.814 14:22:39 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:11.814 14:22:39 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:24:13.189 14:22:40 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:13.189 14:22:40 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:13.189 14:22:40 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:13.189 14:22:40 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:13.189 14:22:40 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:15.089 14:22:42 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:15.089 14:22:42 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:15.089 14:22:42 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK1 00:24:15.089 14:22:42 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:15.089 14:22:42 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:15.089 14:22:42 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:15.089 14:22:42 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:15.089 14:22:42 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:24:16.055 14:22:43 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:16.055 14:22:43 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:16.055 14:22:43 
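Everything from Malloc1 through Malloc11 above is one four-step loop, condensed here from the traced multiconnection.sh commands (rpc_cmd is the test harness wrapper around scripts/rpc.py):

  for i in $(seq 1 11); do
      rpc_cmd bdev_malloc_create 64 512 -b Malloc$i                      # 64 MiB bdev, 512 B blocks
      rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
      rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
      rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
          -t rdma -a 192.168.100.8 -s 4420
  done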
nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:16.055 14:22:43 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:16.055 14:22:43 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:18.581 14:22:45 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:18.581 14:22:45 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:18.581 14:22:45 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK2 00:24:18.581 14:22:45 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:18.581 14:22:45 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:18.581 14:22:45 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:18.582 14:22:45 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:18.582 14:22:45 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:24:19.514 14:22:46 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:19.515 14:22:46 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:19.515 14:22:46 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:19.515 14:22:46 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:19.515 14:22:46 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:21.410 14:22:48 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:21.410 14:22:48 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:21.410 14:22:48 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK3 00:24:21.410 14:22:48 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:21.410 14:22:48 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:21.411 14:22:48 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:21.411 14:22:48 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.411 14:22:48 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:24:22.783 14:22:49 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:22.783 14:22:49 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:22.783 14:22:49 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:22.783 14:22:49 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:22.783 14:22:49 nvmf_rdma.nvmf_multiconnection -- 
common/autotest_common.sh@1201 -- # sleep 2 00:24:24.681 14:22:51 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:24.681 14:22:51 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:24.681 14:22:51 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK4 00:24:24.681 14:22:51 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:24.681 14:22:51 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:24.681 14:22:51 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:24.681 14:22:51 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:24.681 14:22:51 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:24:26.053 14:22:53 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:26.053 14:22:53 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:26.053 14:22:53 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:26.053 14:22:53 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:26.053 14:22:53 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:27.950 14:22:55 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:27.950 14:22:55 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:27.950 14:22:55 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK5 00:24:27.950 14:22:55 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:27.950 14:22:55 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:27.950 14:22:55 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:27.950 14:22:55 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:27.950 14:22:55 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode6 -a 192.168.100.8 -s 4420 00:24:28.883 14:22:56 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:28.883 14:22:56 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:28.883 14:22:56 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:28.884 14:22:56 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:28.884 14:22:56 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:31.409 14:22:58 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:31.409 14:22:58 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 
00:24:31.409 14:22:58 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK6 00:24:31.409 14:22:58 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:31.409 14:22:58 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:31.409 14:22:58 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:31.409 14:22:58 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:31.409 14:22:58 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode7 -a 192.168.100.8 -s 4420 00:24:32.341 14:22:59 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:24:32.341 14:22:59 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:32.341 14:22:59 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:32.341 14:22:59 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:32.341 14:22:59 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:34.296 14:23:01 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:34.296 14:23:01 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:34.296 14:23:01 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK7 00:24:34.296 14:23:01 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:34.296 14:23:01 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:34.296 14:23:01 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:34.296 14:23:01 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:34.296 14:23:01 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode8 -a 192.168.100.8 -s 4420 00:24:35.668 14:23:02 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:24:35.668 14:23:02 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:35.668 14:23:02 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:35.668 14:23:02 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:35.668 14:23:02 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:37.562 14:23:04 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:37.562 14:23:04 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:37.562 14:23:04 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK8 00:24:37.562 14:23:04 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:37.562 14:23:04 nvmf_rdma.nvmf_multiconnection -- 
common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:37.562 14:23:04 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:37.562 14:23:04 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:37.562 14:23:04 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode9 -a 192.168.100.8 -s 4420 00:24:38.495 14:23:05 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:24:38.495 14:23:05 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:38.495 14:23:05 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:38.495 14:23:05 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:38.495 14:23:05 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:40.393 14:23:07 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:40.393 14:23:07 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:40.393 14:23:07 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK9 00:24:40.651 14:23:07 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:40.651 14:23:07 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:40.651 14:23:07 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:40.651 14:23:07 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:40.651 14:23:07 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode10 -a 192.168.100.8 -s 4420 00:24:41.584 14:23:08 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:24:41.584 14:23:08 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:41.584 14:23:08 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:41.584 14:23:08 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:41.584 14:23:08 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:44.109 14:23:10 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:44.109 14:23:10 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:44.109 14:23:10 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK10 00:24:44.109 14:23:10 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:44.109 14:23:10 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:44.109 14:23:10 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:44.109 14:23:10 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # 
for i in $(seq 1 $NVMF_SUBSYS) 00:24:44.109 14:23:10 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode11 -a 192.168.100.8 -s 4420 00:24:45.041 14:23:12 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:24:45.041 14:23:12 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:45.041 14:23:12 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:45.041 14:23:12 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:45.041 14:23:12 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:46.932 14:23:14 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:46.932 14:23:14 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:46.932 14:23:14 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK11 00:24:46.932 14:23:14 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:46.932 14:23:14 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:46.932 14:23:14 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:46.932 14:23:14 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:24:46.932 [global] 00:24:46.932 thread=1 00:24:46.932 invalidate=1 00:24:46.932 rw=read 00:24:46.932 time_based=1 00:24:46.932 runtime=10 00:24:46.932 ioengine=libaio 00:24:46.932 direct=1 00:24:46.932 bs=262144 00:24:46.932 iodepth=64 00:24:46.932 norandommap=1 00:24:46.932 numjobs=1 00:24:46.932 00:24:46.932 [job0] 00:24:46.932 filename=/dev/nvme0n1 00:24:46.932 [job1] 00:24:46.932 filename=/dev/nvme10n1 00:24:46.932 [job2] 00:24:46.932 filename=/dev/nvme1n1 00:24:46.932 [job3] 00:24:46.932 filename=/dev/nvme2n1 00:24:46.932 [job4] 00:24:46.932 filename=/dev/nvme3n1 00:24:46.932 [job5] 00:24:46.932 filename=/dev/nvme4n1 00:24:46.932 [job6] 00:24:46.932 filename=/dev/nvme5n1 00:24:46.932 [job7] 00:24:46.932 filename=/dev/nvme6n1 00:24:46.932 [job8] 00:24:46.932 filename=/dev/nvme7n1 00:24:46.932 [job9] 00:24:46.933 filename=/dev/nvme8n1 00:24:46.933 [job10] 00:24:46.933 filename=/dev/nvme9n1 00:24:46.933 Could not set queue depth (nvme0n1) 00:24:46.933 Could not set queue depth (nvme10n1) 00:24:46.933 Could not set queue depth (nvme1n1) 00:24:46.933 Could not set queue depth (nvme2n1) 00:24:46.933 Could not set queue depth (nvme3n1) 00:24:46.933 Could not set queue depth (nvme4n1) 00:24:46.933 Could not set queue depth (nvme5n1) 00:24:46.933 Could not set queue depth (nvme6n1) 00:24:46.933 Could not set queue depth (nvme7n1) 00:24:46.933 Could not set queue depth (nvme8n1) 00:24:46.933 Could not set queue depth (nvme9n1) 00:24:47.190 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:47.190 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:47.190 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:24:47.190 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:47.190 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:47.190 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:47.190 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:47.190 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:47.190 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:47.190 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:47.190 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:47.190 fio-3.35 00:24:47.190 Starting 11 threads 00:24:59.418 00:24:59.418 job0: (groupid=0, jobs=1): err= 0: pid=164461: Wed Jul 24 14:23:24 2024 00:24:59.418 read: IOPS=1448, BW=362MiB/s (380MB/s)(3645MiB/10064msec) 00:24:59.418 slat (usec): min=12, max=65740, avg=580.03, stdev=2736.16 00:24:59.418 clat (usec): min=250, max=164771, avg=43553.15, stdev=31945.46 00:24:59.418 lat (usec): min=284, max=168884, avg=44133.18, stdev=32454.01 00:24:59.418 clat percentiles (usec): 00:24:59.418 | 1.00th=[ 1778], 5.00th=[ 4752], 10.00th=[ 15008], 20.00th=[ 16712], 00:24:59.418 | 30.00th=[ 18220], 40.00th=[ 24511], 50.00th=[ 33162], 60.00th=[ 42730], 00:24:59.418 | 70.00th=[ 56361], 80.00th=[ 77071], 90.00th=[ 95945], 95.00th=[105382], 00:24:59.418 | 99.00th=[120062], 99.50th=[127402], 99.90th=[141558], 99.95th=[154141], 00:24:59.418 | 99.99th=[164627] 00:24:59.418 bw ( KiB/s): min=151552, max=920064, per=11.54%, avg=371571.30, stdev=199742.66, samples=20 00:24:59.418 iops : min= 592, max= 3594, avg=1451.45, stdev=780.24, samples=20 00:24:59.418 lat (usec) : 500=0.05%, 750=0.01%, 1000=0.08% 00:24:59.418 lat (msec) : 2=1.56%, 4=2.39%, 10=3.91%, 20=27.03%, 50=29.95% 00:24:59.418 lat (msec) : 100=27.96%, 250=7.06% 00:24:59.418 cpu : usr=0.64%, sys=4.69%, ctx=4975, majf=0, minf=4097 00:24:59.418 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:24:59.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:59.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:59.418 issued rwts: total=14579,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:59.418 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:59.418 job1: (groupid=0, jobs=1): err= 0: pid=164462: Wed Jul 24 14:23:24 2024 00:24:59.418 read: IOPS=1487, BW=372MiB/s (390MB/s)(3738MiB/10049msec) 00:24:59.418 slat (usec): min=11, max=61920, avg=611.88, stdev=2281.33 00:24:59.418 clat (usec): min=512, max=137824, avg=42362.09, stdev=28550.43 00:24:59.418 lat (usec): min=556, max=179292, avg=42973.97, stdev=29018.33 00:24:59.418 clat percentiles (usec): 00:24:59.418 | 1.00th=[ 1778], 5.00th=[ 15139], 10.00th=[ 16450], 20.00th=[ 17171], 00:24:59.418 | 30.00th=[ 17957], 40.00th=[ 23200], 50.00th=[ 34866], 60.00th=[ 43779], 00:24:59.418 | 70.00th=[ 55837], 80.00th=[ 71828], 90.00th=[ 86508], 95.00th=[ 96994], 00:24:59.418 | 99.00th=[116917], 99.50th=[124257], 99.90th=[130548], 99.95th=[135267], 00:24:59.418 | 99.99th=[137364] 00:24:59.418 bw ( KiB/s): min=159232, max=957440, per=11.83%, 
avg=381107.05, stdev=220536.86, samples=20 00:24:59.418 iops : min= 622, max= 3740, avg=1488.65, stdev=861.44, samples=20 00:24:59.418 lat (usec) : 750=0.09%, 1000=0.09% 00:24:59.418 lat (msec) : 2=1.09%, 4=0.98%, 10=1.06%, 20=33.87%, 50=27.23% 00:24:59.418 lat (msec) : 100=31.73%, 250=3.85% 00:24:59.418 cpu : usr=0.51%, sys=4.91%, ctx=3945, majf=0, minf=4097 00:24:59.418 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:24:59.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:59.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:59.419 issued rwts: total=14952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:59.419 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:59.419 job2: (groupid=0, jobs=1): err= 0: pid=164463: Wed Jul 24 14:23:24 2024 00:24:59.419 read: IOPS=894, BW=224MiB/s (234MB/s)(2251MiB/10064msec) 00:24:59.419 slat (usec): min=12, max=63005, avg=935.22, stdev=3793.02 00:24:59.419 clat (usec): min=616, max=189366, avg=70541.85, stdev=28444.55 00:24:59.419 lat (usec): min=718, max=189389, avg=71477.07, stdev=29045.07 00:24:59.419 clat percentiles (msec): 00:24:59.419 | 1.00th=[ 4], 5.00th=[ 20], 10.00th=[ 37], 20.00th=[ 50], 00:24:59.419 | 30.00th=[ 57], 40.00th=[ 62], 50.00th=[ 70], 60.00th=[ 78], 00:24:59.419 | 70.00th=[ 86], 80.00th=[ 93], 90.00th=[ 106], 95.00th=[ 124], 00:24:59.419 | 99.00th=[ 133], 99.50th=[ 136], 99.90th=[ 153], 99.95th=[ 159], 00:24:59.419 | 99.99th=[ 190] 00:24:59.419 bw ( KiB/s): min=132608, max=352768, per=7.10%, avg=228814.65, stdev=63157.05, samples=20 00:24:59.419 iops : min= 518, max= 1378, avg=893.80, stdev=246.71, samples=20 00:24:59.419 lat (usec) : 750=0.02%, 1000=0.01% 00:24:59.419 lat (msec) : 2=0.53%, 4=0.62%, 10=2.08%, 20=1.86%, 50=15.97% 00:24:59.419 lat (msec) : 100=67.05%, 250=11.85% 00:24:59.419 cpu : usr=0.33%, sys=3.04%, ctx=3111, majf=0, minf=4097 00:24:59.419 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:24:59.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:59.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:59.419 issued rwts: total=9002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:59.419 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:59.419 job3: (groupid=0, jobs=1): err= 0: pid=164464: Wed Jul 24 14:23:24 2024 00:24:59.419 read: IOPS=1147, BW=287MiB/s (301MB/s)(2880MiB/10035msec) 00:24:59.419 slat (usec): min=12, max=53386, avg=817.27, stdev=2693.11 00:24:59.419 clat (usec): min=245, max=161857, avg=54882.46, stdev=31724.16 00:24:59.419 lat (usec): min=268, max=162582, avg=55699.73, stdev=32258.21 00:24:59.419 clat percentiles (msec): 00:24:59.419 | 1.00th=[ 5], 5.00th=[ 16], 10.00th=[ 17], 20.00th=[ 22], 00:24:59.419 | 30.00th=[ 35], 40.00th=[ 41], 50.00th=[ 55], 60.00th=[ 59], 00:24:59.419 | 70.00th=[ 68], 80.00th=[ 83], 90.00th=[ 102], 95.00th=[ 115], 00:24:59.419 | 99.00th=[ 131], 99.50th=[ 136], 99.90th=[ 153], 99.95th=[ 157], 00:24:59.419 | 99.99th=[ 159] 00:24:59.419 bw ( KiB/s): min=125440, max=739328, per=9.10%, avg=293276.30, stdev=162466.01, samples=20 00:24:59.419 iops : min= 490, max= 2888, avg=1145.60, stdev=634.64, samples=20 00:24:59.419 lat (usec) : 250=0.01%, 500=0.09% 00:24:59.419 lat (msec) : 2=0.18%, 4=0.56%, 10=1.83%, 20=16.14%, 50=28.09% 00:24:59.419 lat (msec) : 100=42.53%, 250=10.57% 00:24:59.419 cpu : usr=0.40%, sys=3.49%, ctx=2802, majf=0, minf=4097 00:24:59.419 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:24:59.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:59.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:59.419 issued rwts: total=11520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:59.419 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:59.419 job4: (groupid=0, jobs=1): err= 0: pid=164465: Wed Jul 24 14:23:24 2024 00:24:59.419 read: IOPS=1024, BW=256MiB/s (269MB/s)(2569MiB/10031msec) 00:24:59.419 slat (usec): min=12, max=90689, avg=768.62, stdev=3814.92 00:24:59.419 clat (usec): min=921, max=215056, avg=61638.68, stdev=31521.38 00:24:59.419 lat (usec): min=976, max=217224, avg=62407.30, stdev=32233.96 00:24:59.419 clat percentiles (msec): 00:24:59.419 | 1.00th=[ 6], 5.00th=[ 15], 10.00th=[ 21], 20.00th=[ 34], 00:24:59.419 | 30.00th=[ 43], 40.00th=[ 51], 50.00th=[ 57], 60.00th=[ 66], 00:24:59.419 | 70.00th=[ 80], 80.00th=[ 94], 90.00th=[ 106], 95.00th=[ 121], 00:24:59.419 | 99.00th=[ 130], 99.50th=[ 131], 99.90th=[ 144], 99.95th=[ 205], 00:24:59.419 | 99.99th=[ 215] 00:24:59.419 bw ( KiB/s): min=126464, max=393728, per=8.12%, avg=261451.50, stdev=74031.76, samples=20 00:24:59.419 iops : min= 494, max= 1538, avg=1021.25, stdev=289.18, samples=20 00:24:59.419 lat (usec) : 1000=0.02% 00:24:59.419 lat (msec) : 2=0.26%, 4=0.50%, 10=0.97%, 20=7.66%, 50=28.88% 00:24:59.419 lat (msec) : 100=47.97%, 250=13.74% 00:24:59.419 cpu : usr=0.37%, sys=3.71%, ctx=4488, majf=0, minf=4097 00:24:59.419 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:24:59.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:59.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:59.419 issued rwts: total=10277,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:59.419 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:59.419 job5: (groupid=0, jobs=1): err= 0: pid=164466: Wed Jul 24 14:23:24 2024 00:24:59.419 read: IOPS=1239, BW=310MiB/s (325MB/s)(3117MiB/10062msec) 00:24:59.419 slat (usec): min=12, max=54784, avg=698.09, stdev=2525.85 00:24:59.419 clat (usec): min=784, max=171499, avg=50884.35, stdev=26461.31 00:24:59.419 lat (usec): min=839, max=179691, avg=51582.43, stdev=26902.28 00:24:59.419 clat percentiles (msec): 00:24:59.419 | 1.00th=[ 4], 5.00th=[ 18], 10.00th=[ 20], 20.00th=[ 24], 00:24:59.419 | 30.00th=[ 33], 40.00th=[ 39], 50.00th=[ 51], 60.00th=[ 57], 00:24:59.419 | 70.00th=[ 67], 80.00th=[ 77], 90.00th=[ 88], 95.00th=[ 95], 00:24:59.419 | 99.00th=[ 121], 99.50th=[ 127], 99.90th=[ 131], 99.95th=[ 136], 00:24:59.419 | 99.99th=[ 140] 00:24:59.419 bw ( KiB/s): min=172544, max=615936, per=9.86%, avg=317616.55, stdev=141033.42, samples=20 00:24:59.419 iops : min= 674, max= 2406, avg=1240.65, stdev=550.83, samples=20 00:24:59.419 lat (usec) : 1000=0.02% 00:24:59.419 lat (msec) : 2=0.24%, 4=0.87%, 10=0.75%, 20=13.28%, 50=34.46% 00:24:59.419 lat (msec) : 100=47.76%, 250=2.61% 00:24:59.419 cpu : usr=0.56%, sys=4.02%, ctx=3695, majf=0, minf=4097 00:24:59.419 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:24:59.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:59.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:59.419 issued rwts: total=12469,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:59.419 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:59.419 job6: (groupid=0, jobs=1): err= 0: pid=164467: Wed 
Jul 24 14:23:24 2024 00:24:59.419 read: IOPS=953, BW=238MiB/s (250MB/s)(2398MiB/10064msec) 00:24:59.419 slat (usec): min=13, max=64889, avg=906.00, stdev=3571.12 00:24:59.419 clat (usec): min=1085, max=184273, avg=66181.71, stdev=30156.34 00:24:59.419 lat (usec): min=1132, max=187346, avg=67087.71, stdev=30749.16 00:24:59.419 clat percentiles (usec): 00:24:59.419 | 1.00th=[ 1614], 5.00th=[ 14877], 10.00th=[ 27919], 20.00th=[ 39060], 00:24:59.419 | 30.00th=[ 54789], 40.00th=[ 57934], 50.00th=[ 62653], 60.00th=[ 73925], 00:24:59.419 | 70.00th=[ 81265], 80.00th=[ 91751], 90.00th=[104334], 95.00th=[122160], 00:24:59.419 | 99.00th=[131597], 99.50th=[135267], 99.90th=[160433], 99.95th=[177210], 00:24:59.419 | 99.99th=[183501] 00:24:59.419 bw ( KiB/s): min=140288, max=448000, per=7.57%, avg=243919.80, stdev=85779.77, samples=20 00:24:59.419 iops : min= 548, max= 1750, avg=952.80, stdev=335.08, samples=20 00:24:59.419 lat (msec) : 2=1.32%, 4=1.30%, 10=1.04%, 20=3.83%, 50=19.12% 00:24:59.419 lat (msec) : 100=60.54%, 250=12.84% 00:24:59.419 cpu : usr=0.34%, sys=3.36%, ctx=3330, majf=0, minf=3721 00:24:59.419 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:24:59.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:59.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:59.419 issued rwts: total=9592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:59.419 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:59.419 job7: (groupid=0, jobs=1): err= 0: pid=164468: Wed Jul 24 14:23:24 2024 00:24:59.419 read: IOPS=1015, BW=254MiB/s (266MB/s)(2547MiB/10031msec) 00:24:59.419 slat (usec): min=12, max=38412, avg=812.23, stdev=2728.95 00:24:59.419 clat (usec): min=451, max=162230, avg=62133.62, stdev=31738.63 00:24:59.419 lat (usec): min=473, max=162258, avg=62945.84, stdev=32252.55 00:24:59.419 clat percentiles (usec): 00:24:59.419 | 1.00th=[ 1663], 5.00th=[ 7635], 10.00th=[ 23462], 20.00th=[ 34866], 00:24:59.419 | 30.00th=[ 41157], 40.00th=[ 51119], 50.00th=[ 58459], 60.00th=[ 69731], 00:24:59.419 | 70.00th=[ 78119], 80.00th=[ 90702], 90.00th=[106431], 95.00th=[121111], 00:24:59.419 | 99.00th=[132645], 99.50th=[137364], 99.90th=[156238], 99.95th=[156238], 00:24:59.419 | 99.99th=[158335] 00:24:59.419 bw ( KiB/s): min=141824, max=593920, per=8.05%, avg=259197.80, stdev=105933.38, samples=20 00:24:59.419 iops : min= 554, max= 2320, avg=1012.45, stdev=413.79, samples=20 00:24:59.419 lat (usec) : 500=0.02%, 750=0.26%, 1000=0.09% 00:24:59.419 lat (msec) : 2=1.22%, 4=2.42%, 10=1.34%, 20=3.27%, 50=29.13% 00:24:59.419 lat (msec) : 100=48.82%, 250=13.44% 00:24:59.419 cpu : usr=0.33%, sys=3.59%, ctx=3732, majf=0, minf=4097 00:24:59.419 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:24:59.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:59.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:59.419 issued rwts: total=10189,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:59.419 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:59.419 job8: (groupid=0, jobs=1): err= 0: pid=164469: Wed Jul 24 14:23:24 2024 00:24:59.419 read: IOPS=1143, BW=286MiB/s (300MB/s)(2872MiB/10047msec) 00:24:59.419 slat (usec): min=13, max=59167, avg=684.03, stdev=2937.17 00:24:59.419 clat (usec): min=742, max=165034, avg=55245.88, stdev=33141.49 00:24:59.419 lat (usec): min=777, max=166737, avg=55929.91, stdev=33727.38 00:24:59.419 clat percentiles (msec): 
00:24:59.419 | 1.00th=[ 4], 5.00th=[ 12], 10.00th=[ 17], 20.00th=[ 21], 00:24:59.419 | 30.00th=[ 35], 40.00th=[ 42], 50.00th=[ 50], 60.00th=[ 57], 00:24:59.419 | 70.00th=[ 73], 80.00th=[ 91], 90.00th=[ 104], 95.00th=[ 112], 00:24:59.419 | 99.00th=[ 131], 99.50th=[ 136], 99.90th=[ 155], 99.95th=[ 155], 00:24:59.419 | 99.99th=[ 161] 00:24:59.420 bw ( KiB/s): min=125440, max=737792, per=9.08%, avg=292406.45, stdev=138540.02, samples=20 00:24:59.420 iops : min= 490, max= 2882, avg=1142.20, stdev=541.16, samples=20 00:24:59.420 lat (usec) : 750=0.01%, 1000=0.01% 00:24:59.420 lat (msec) : 2=0.44%, 4=0.92%, 10=2.84%, 20=15.12%, 50=31.70% 00:24:59.420 lat (msec) : 100=36.58%, 250=12.37% 00:24:59.420 cpu : usr=0.43%, sys=4.12%, ctx=4607, majf=0, minf=4097 00:24:59.420 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:24:59.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:59.420 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:59.420 issued rwts: total=11486,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:59.420 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:59.420 job9: (groupid=0, jobs=1): err= 0: pid=164470: Wed Jul 24 14:23:24 2024 00:24:59.420 read: IOPS=1092, BW=273MiB/s (286MB/s)(2743MiB/10048msec) 00:24:59.420 slat (usec): min=12, max=55472, avg=803.14, stdev=2990.74 00:24:59.420 clat (usec): min=616, max=155659, avg=57745.74, stdev=27955.07 00:24:59.420 lat (usec): min=698, max=159109, avg=58548.89, stdev=28521.45 00:24:59.420 clat percentiles (msec): 00:24:59.420 | 1.00th=[ 3], 5.00th=[ 10], 10.00th=[ 28], 20.00th=[ 36], 00:24:59.420 | 30.00th=[ 41], 40.00th=[ 50], 50.00th=[ 53], 60.00th=[ 59], 00:24:59.420 | 70.00th=[ 73], 80.00th=[ 86], 90.00th=[ 100], 95.00th=[ 108], 00:24:59.420 | 99.00th=[ 122], 99.50th=[ 128], 99.90th=[ 136], 99.95th=[ 140], 00:24:59.420 | 99.99th=[ 146] 00:24:59.420 bw ( KiB/s): min=155136, max=587776, per=8.67%, avg=279263.95, stdev=110478.69, samples=20 00:24:59.420 iops : min= 606, max= 2296, avg=1090.85, stdev=431.55, samples=20 00:24:59.420 lat (usec) : 750=0.01%, 1000=0.04% 00:24:59.420 lat (msec) : 2=0.48%, 4=1.24%, 10=3.96%, 20=2.23%, 50=33.83% 00:24:59.420 lat (msec) : 100=48.90%, 250=9.30% 00:24:59.420 cpu : usr=0.41%, sys=3.77%, ctx=3665, majf=0, minf=4097 00:24:59.420 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:24:59.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:59.420 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:59.420 issued rwts: total=10973,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:59.420 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:59.420 job10: (groupid=0, jobs=1): err= 0: pid=164471: Wed Jul 24 14:23:24 2024 00:24:59.420 read: IOPS=1155, BW=289MiB/s (303MB/s)(2898MiB/10035msec) 00:24:59.420 slat (usec): min=12, max=58203, avg=706.17, stdev=3030.89 00:24:59.420 clat (usec): min=607, max=178226, avg=54626.22, stdev=31756.55 00:24:59.420 lat (usec): min=643, max=178253, avg=55332.39, stdev=32326.26 00:24:59.420 clat percentiles (msec): 00:24:59.420 | 1.00th=[ 3], 5.00th=[ 11], 10.00th=[ 17], 20.00th=[ 20], 00:24:59.420 | 30.00th=[ 31], 40.00th=[ 50], 50.00th=[ 57], 60.00th=[ 62], 00:24:59.420 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 95], 95.00th=[ 117], 00:24:59.420 | 99.00th=[ 131], 99.50th=[ 136], 99.90th=[ 161], 99.95th=[ 165], 00:24:59.420 | 99.99th=[ 178] 00:24:59.420 bw ( KiB/s): min=126976, max=771584, per=9.16%, 
avg=295114.85, stdev=164019.97, samples=20 00:24:59.420 iops : min= 496, max= 3014, avg=1152.75, stdev=640.71, samples=20 00:24:59.420 lat (usec) : 750=0.04%, 1000=0.02% 00:24:59.420 lat (msec) : 2=0.55%, 4=1.03%, 10=3.09%, 20=18.90%, 50=17.24% 00:24:59.420 lat (msec) : 100=52.07%, 250=7.07% 00:24:59.420 cpu : usr=0.37%, sys=4.05%, ctx=4385, majf=0, minf=4097 00:24:59.420 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:24:59.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:59.420 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:59.420 issued rwts: total=11592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:59.420 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:59.420 00:24:59.420 Run status group 0 (all jobs): 00:24:59.420 READ: bw=3146MiB/s (3298MB/s), 224MiB/s-372MiB/s (234MB/s-390MB/s), io=30.9GiB (33.2GB), run=10031-10064msec 00:24:59.420 00:24:59.420 Disk stats (read/write): 00:24:59.420 nvme0n1: ios=28983/0, merge=0/0, ticks=1233368/0, in_queue=1233368, util=97.47% 00:24:59.420 nvme10n1: ios=29688/0, merge=0/0, ticks=1231486/0, in_queue=1231486, util=97.66% 00:24:59.420 nvme1n1: ios=17831/0, merge=0/0, ticks=1233837/0, in_queue=1233837, util=97.90% 00:24:59.420 nvme2n1: ios=22802/0, merge=0/0, ticks=1231536/0, in_queue=1231536, util=98.02% 00:24:59.420 nvme3n1: ios=20313/0, merge=0/0, ticks=1238973/0, in_queue=1238973, util=98.07% 00:24:59.420 nvme4n1: ios=24766/0, merge=0/0, ticks=1231697/0, in_queue=1231697, util=98.35% 00:24:59.420 nvme5n1: ios=19015/0, merge=0/0, ticks=1231593/0, in_queue=1231593, util=98.50% 00:24:59.420 nvme6n1: ios=20142/0, merge=0/0, ticks=1235257/0, in_queue=1235257, util=98.60% 00:24:59.420 nvme7n1: ios=22793/0, merge=0/0, ticks=1236704/0, in_queue=1236704, util=98.92% 00:24:59.420 nvme8n1: ios=21740/0, merge=0/0, ticks=1232330/0, in_queue=1232330, util=99.10% 00:24:59.420 nvme9n1: ios=22960/0, merge=0/0, ticks=1235989/0, in_queue=1235989, util=99.20% 00:24:59.420 14:23:24 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:24:59.420 [global] 00:24:59.420 thread=1 00:24:59.420 invalidate=1 00:24:59.420 rw=randwrite 00:24:59.420 time_based=1 00:24:59.420 runtime=10 00:24:59.420 ioengine=libaio 00:24:59.420 direct=1 00:24:59.420 bs=262144 00:24:59.420 iodepth=64 00:24:59.420 norandommap=1 00:24:59.420 numjobs=1 00:24:59.420 00:24:59.420 [job0] 00:24:59.420 filename=/dev/nvme0n1 00:24:59.420 [job1] 00:24:59.420 filename=/dev/nvme10n1 00:24:59.420 [job2] 00:24:59.420 filename=/dev/nvme1n1 00:24:59.420 [job3] 00:24:59.420 filename=/dev/nvme2n1 00:24:59.420 [job4] 00:24:59.420 filename=/dev/nvme3n1 00:24:59.420 [job5] 00:24:59.420 filename=/dev/nvme4n1 00:24:59.420 [job6] 00:24:59.420 filename=/dev/nvme5n1 00:24:59.420 [job7] 00:24:59.420 filename=/dev/nvme6n1 00:24:59.420 [job8] 00:24:59.420 filename=/dev/nvme7n1 00:24:59.420 [job9] 00:24:59.420 filename=/dev/nvme8n1 00:24:59.420 [job10] 00:24:59.420 filename=/dev/nvme9n1 00:24:59.420 Could not set queue depth (nvme0n1) 00:24:59.420 Could not set queue depth (nvme10n1) 00:24:59.420 Could not set queue depth (nvme1n1) 00:24:59.420 Could not set queue depth (nvme2n1) 00:24:59.420 Could not set queue depth (nvme3n1) 00:24:59.420 Could not set queue depth (nvme4n1) 00:24:59.420 Could not set queue depth (nvme5n1) 00:24:59.420 Could not set queue depth (nvme6n1) 00:24:59.420 Could not set 
queue depth (nvme7n1) 00:24:59.420 Could not set queue depth (nvme8n1) 00:24:59.420 Could not set queue depth (nvme9n1) 00:24:59.420 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:59.420 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:59.420 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:59.420 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:59.420 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:59.420 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:59.420 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:59.420 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:59.420 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:59.420 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:59.420 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:59.420 fio-3.35 00:24:59.420 Starting 11 threads 00:25:09.390 00:25:09.390 job0: (groupid=0, jobs=1): err= 0: pid=165637: Wed Jul 24 14:23:35 2024 00:25:09.390 write: IOPS=893, BW=223MiB/s (234MB/s)(2257MiB/10104msec); 0 zone resets 00:25:09.390 slat (usec): min=22, max=54093, avg=1022.68, stdev=2570.89 00:25:09.390 clat (usec): min=329, max=223643, avg=70568.88, stdev=34267.23 00:25:09.390 lat (usec): min=372, max=223711, avg=71591.57, stdev=34758.38 00:25:09.390 clat percentiles (msec): 00:25:09.390 | 1.00th=[ 3], 5.00th=[ 9], 10.00th=[ 17], 20.00th=[ 43], 00:25:09.390 | 30.00th=[ 59], 40.00th=[ 66], 50.00th=[ 75], 60.00th=[ 82], 00:25:09.390 | 70.00th=[ 86], 80.00th=[ 90], 90.00th=[ 112], 95.00th=[ 128], 00:25:09.390 | 99.00th=[ 155], 99.50th=[ 184], 99.90th=[ 218], 99.95th=[ 218], 00:25:09.390 | 99.99th=[ 224] 00:25:09.390 bw ( KiB/s): min=121856, max=611840, per=7.93%, avg=229443.65, stdev=101648.13, samples=20 00:25:09.390 iops : min= 476, max= 2390, avg=896.20, stdev=397.10, samples=20 00:25:09.390 lat (usec) : 500=0.06%, 750=0.16%, 1000=0.13% 00:25:09.390 lat (msec) : 2=0.63%, 4=0.72%, 10=4.53%, 20=5.12%, 50=13.53% 00:25:09.390 lat (msec) : 100=59.75%, 250=15.38% 00:25:09.390 cpu : usr=2.59%, sys=3.58%, ctx=2603, majf=0, minf=1 00:25:09.390 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:25:09.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:09.390 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:09.390 issued rwts: total=0,9027,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:09.390 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:09.390 job1: (groupid=0, jobs=1): err= 0: pid=165649: Wed Jul 24 14:23:35 2024 00:25:09.390 write: IOPS=958, BW=240MiB/s (251MB/s)(2420MiB/10102msec); 0 zone resets 00:25:09.390 slat (usec): min=27, max=109652, avg=878.12, stdev=2788.81 00:25:09.390 clat (usec): min=1469, max=231324, avg=65864.00, stdev=33112.91 00:25:09.390 lat (usec): min=1536, 
max=267533, avg=66742.12, stdev=33557.02 00:25:09.390 clat percentiles (msec): 00:25:09.390 | 1.00th=[ 9], 5.00th=[ 23], 10.00th=[ 36], 20.00th=[ 42], 00:25:09.390 | 30.00th=[ 44], 40.00th=[ 47], 50.00th=[ 57], 60.00th=[ 66], 00:25:09.390 | 70.00th=[ 83], 80.00th=[ 92], 90.00th=[ 112], 95.00th=[ 126], 00:25:09.390 | 99.00th=[ 165], 99.50th=[ 194], 99.90th=[ 220], 99.95th=[ 222], 00:25:09.390 | 99.99th=[ 232] 00:25:09.390 bw ( KiB/s): min=116224, max=420864, per=8.51%, avg=246194.25, stdev=89286.93, samples=20 00:25:09.390 iops : min= 454, max= 1644, avg=961.60, stdev=348.80, samples=20 00:25:09.390 lat (msec) : 2=0.06%, 4=0.11%, 10=0.92%, 20=1.68%, 50=40.59% 00:25:09.390 lat (msec) : 100=38.72%, 250=17.91% 00:25:09.390 cpu : usr=2.81%, sys=3.75%, ctx=2822, majf=0, minf=1 00:25:09.390 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:25:09.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:09.390 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:09.390 issued rwts: total=0,9681,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:09.390 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:09.390 job2: (groupid=0, jobs=1): err= 0: pid=165650: Wed Jul 24 14:23:35 2024 00:25:09.390 write: IOPS=1111, BW=278MiB/s (291MB/s)(2797MiB/10061msec); 0 zone resets 00:25:09.390 slat (usec): min=22, max=52514, avg=775.17, stdev=2235.50 00:25:09.390 clat (usec): min=929, max=212521, avg=56753.02, stdev=36471.28 00:25:09.390 lat (usec): min=1000, max=212623, avg=57528.20, stdev=37018.48 00:25:09.390 clat percentiles (msec): 00:25:09.390 | 1.00th=[ 4], 5.00th=[ 13], 10.00th=[ 21], 20.00th=[ 22], 00:25:09.390 | 30.00th=[ 23], 40.00th=[ 43], 50.00th=[ 50], 60.00th=[ 64], 00:25:09.390 | 70.00th=[ 78], 80.00th=[ 87], 90.00th=[ 110], 95.00th=[ 124], 00:25:09.390 | 99.00th=[ 153], 99.50th=[ 176], 99.90th=[ 184], 99.95th=[ 184], 00:25:09.390 | 99.99th=[ 207] 00:25:09.390 bw ( KiB/s): min=113664, max=768000, per=9.84%, avg=284706.75, stdev=155455.10, samples=20 00:25:09.390 iops : min= 444, max= 3000, avg=1112.10, stdev=607.27, samples=20 00:25:09.390 lat (usec) : 1000=0.01% 00:25:09.390 lat (msec) : 2=0.38%, 4=0.72%, 10=2.53%, 20=6.67%, 50=40.38% 00:25:09.390 lat (msec) : 100=34.17%, 250=15.15% 00:25:09.390 cpu : usr=3.04%, sys=4.37%, ctx=3139, majf=0, minf=1 00:25:09.390 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:25:09.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:09.390 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:09.390 issued rwts: total=0,11186,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:09.390 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:09.390 job3: (groupid=0, jobs=1): err= 0: pid=165651: Wed Jul 24 14:23:35 2024 00:25:09.390 write: IOPS=815, BW=204MiB/s (214MB/s)(2061MiB/10114msec); 0 zone resets 00:25:09.390 slat (usec): min=21, max=81818, avg=977.32, stdev=2858.73 00:25:09.390 clat (usec): min=1434, max=236591, avg=77497.62, stdev=31344.68 00:25:09.390 lat (usec): min=1485, max=236648, avg=78474.94, stdev=31755.59 00:25:09.390 clat percentiles (msec): 00:25:09.390 | 1.00th=[ 10], 5.00th=[ 21], 10.00th=[ 36], 20.00th=[ 59], 00:25:09.390 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 81], 60.00th=[ 84], 00:25:09.390 | 70.00th=[ 89], 80.00th=[ 95], 90.00th=[ 114], 95.00th=[ 136], 00:25:09.390 | 99.00th=[ 163], 99.50th=[ 174], 99.90th=[ 232], 99.95th=[ 236], 00:25:09.390 | 99.99th=[ 236] 00:25:09.390 bw ( 
KiB/s): min=139264, max=293376, per=7.24%, avg=209365.60, stdev=38268.92, samples=20 00:25:09.390 iops : min= 544, max= 1146, avg=817.80, stdev=149.49, samples=20 00:25:09.390 lat (msec) : 2=0.08%, 4=0.29%, 10=0.82%, 20=3.72%, 50=12.36% 00:25:09.390 lat (msec) : 100=64.25%, 250=18.46% 00:25:09.390 cpu : usr=2.28%, sys=3.11%, ctx=2589, majf=0, minf=1 00:25:09.390 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:09.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:09.390 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:09.390 issued rwts: total=0,8243,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:09.390 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:09.390 job4: (groupid=0, jobs=1): err= 0: pid=165652: Wed Jul 24 14:23:35 2024 00:25:09.391 write: IOPS=1266, BW=317MiB/s (332MB/s)(3199MiB/10105msec); 0 zone resets 00:25:09.391 slat (usec): min=22, max=67325, avg=684.74, stdev=1947.98 00:25:09.391 clat (usec): min=715, max=236416, avg=49828.86, stdev=30737.16 00:25:09.391 lat (usec): min=773, max=236460, avg=50513.60, stdev=31131.63 00:25:09.391 clat percentiles (msec): 00:25:09.391 | 1.00th=[ 9], 5.00th=[ 20], 10.00th=[ 21], 20.00th=[ 22], 00:25:09.391 | 30.00th=[ 23], 40.00th=[ 40], 50.00th=[ 45], 60.00th=[ 52], 00:25:09.391 | 70.00th=[ 63], 80.00th=[ 71], 90.00th=[ 89], 95.00th=[ 112], 00:25:09.391 | 99.00th=[ 142], 99.50th=[ 155], 99.90th=[ 218], 99.95th=[ 228], 00:25:09.391 | 99.99th=[ 236] 00:25:09.391 bw ( KiB/s): min=155136, max=763912, per=11.26%, avg=325843.05, stdev=159591.25, samples=20 00:25:09.391 iops : min= 606, max= 2984, avg=1272.80, stdev=623.42, samples=20 00:25:09.391 lat (usec) : 750=0.01%, 1000=0.03% 00:25:09.391 lat (msec) : 2=0.22%, 4=0.20%, 10=0.91%, 20=6.47%, 50=50.92% 00:25:09.391 lat (msec) : 100=34.57%, 250=6.68% 00:25:09.391 cpu : usr=3.59%, sys=4.64%, ctx=3402, majf=0, minf=1 00:25:09.391 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:25:09.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:09.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:09.391 issued rwts: total=0,12795,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:09.391 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:09.391 job5: (groupid=0, jobs=1): err= 0: pid=165653: Wed Jul 24 14:23:35 2024 00:25:09.391 write: IOPS=912, BW=228MiB/s (239MB/s)(2295MiB/10064msec); 0 zone resets 00:25:09.391 slat (usec): min=27, max=66373, avg=977.11, stdev=2537.36 00:25:09.391 clat (usec): min=524, max=200132, avg=69136.69, stdev=32264.31 00:25:09.391 lat (usec): min=568, max=218957, avg=70113.80, stdev=32773.84 00:25:09.391 clat percentiles (msec): 00:25:09.391 | 1.00th=[ 8], 5.00th=[ 31], 10.00th=[ 41], 20.00th=[ 44], 00:25:09.391 | 30.00th=[ 45], 40.00th=[ 49], 50.00th=[ 63], 60.00th=[ 80], 00:25:09.391 | 70.00th=[ 86], 80.00th=[ 92], 90.00th=[ 116], 95.00th=[ 129], 00:25:09.391 | 99.00th=[ 161], 99.50th=[ 178], 99.90th=[ 188], 99.95th=[ 197], 00:25:09.391 | 99.99th=[ 201] 00:25:09.391 bw ( KiB/s): min=110592, max=370688, per=8.07%, avg=233343.95, stdev=77143.35, samples=20 00:25:09.391 iops : min= 432, max= 1448, avg=911.45, stdev=301.35, samples=20 00:25:09.391 lat (usec) : 750=0.05%, 1000=0.03% 00:25:09.391 lat (msec) : 2=0.09%, 4=0.16%, 10=1.42%, 20=2.19%, 50=37.17% 00:25:09.391 lat (msec) : 100=42.54%, 250=16.34% 00:25:09.391 cpu : usr=2.64%, sys=3.50%, ctx=2590, majf=0, minf=1 00:25:09.391 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:25:09.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:09.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:09.391 issued rwts: total=0,9181,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:09.391 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:09.391 job6: (groupid=0, jobs=1): err= 0: pid=165661: Wed Jul 24 14:23:35 2024 00:25:09.391 write: IOPS=1085, BW=271MiB/s (284MB/s)(2728MiB/10056msec); 0 zone resets 00:25:09.391 slat (usec): min=23, max=61096, avg=688.16, stdev=2205.88 00:25:09.391 clat (usec): min=903, max=211634, avg=58261.26, stdev=35489.81 00:25:09.391 lat (usec): min=956, max=211688, avg=58949.41, stdev=35985.75 00:25:09.391 clat percentiles (msec): 00:25:09.391 | 1.00th=[ 4], 5.00th=[ 14], 10.00th=[ 21], 20.00th=[ 22], 00:25:09.391 | 30.00th=[ 30], 40.00th=[ 42], 50.00th=[ 53], 60.00th=[ 66], 00:25:09.391 | 70.00th=[ 83], 80.00th=[ 89], 90.00th=[ 108], 95.00th=[ 118], 00:25:09.391 | 99.00th=[ 146], 99.50th=[ 174], 99.90th=[ 194], 99.95th=[ 199], 00:25:09.391 | 99.99th=[ 211] 00:25:09.391 bw ( KiB/s): min=141312, max=655360, per=9.60%, avg=277715.75, stdev=142143.83, samples=20 00:25:09.391 iops : min= 552, max= 2560, avg=1084.75, stdev=555.30, samples=20 00:25:09.391 lat (usec) : 1000=0.01% 00:25:09.391 lat (msec) : 2=0.54%, 4=0.47%, 10=2.27%, 20=6.25%, 50=38.99% 00:25:09.391 lat (msec) : 100=37.84%, 250=13.63% 00:25:09.391 cpu : usr=3.26%, sys=3.82%, ctx=3555, majf=0, minf=1 00:25:09.391 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:25:09.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:09.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:09.391 issued rwts: total=0,10912,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:09.391 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:09.391 job7: (groupid=0, jobs=1): err= 0: pid=165669: Wed Jul 24 14:23:35 2024 00:25:09.391 write: IOPS=1210, BW=303MiB/s (317MB/s)(3045MiB/10063msec); 0 zone resets 00:25:09.391 slat (usec): min=27, max=40610, avg=792.34, stdev=1620.29 00:25:09.391 clat (msec): min=6, max=134, avg=52.04, stdev=22.29 00:25:09.391 lat (msec): min=6, max=134, avg=52.84, stdev=22.59 00:25:09.391 clat percentiles (msec): 00:25:09.391 | 1.00th=[ 19], 5.00th=[ 21], 10.00th=[ 24], 20.00th=[ 41], 00:25:09.391 | 30.00th=[ 43], 40.00th=[ 44], 50.00th=[ 45], 60.00th=[ 47], 00:25:09.391 | 70.00th=[ 58], 80.00th=[ 68], 90.00th=[ 88], 95.00th=[ 92], 00:25:09.391 | 99.00th=[ 120], 99.50th=[ 124], 99.90th=[ 128], 99.95th=[ 131], 00:25:09.391 | 99.99th=[ 134] 00:25:09.391 bw ( KiB/s): min=160768, max=521216, per=10.72%, avg=310145.95, stdev=99890.04, samples=20 00:25:09.391 iops : min= 628, max= 2036, avg=1211.50, stdev=390.19, samples=20 00:25:09.391 lat (msec) : 10=0.06%, 20=3.16%, 50=60.68%, 100=32.15%, 250=3.95% 00:25:09.391 cpu : usr=3.60%, sys=4.51%, ctx=3073, majf=0, minf=1 00:25:09.391 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:25:09.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:09.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:09.391 issued rwts: total=0,12181,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:09.391 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:09.391 job8: (groupid=0, jobs=1): err= 0: pid=165694: Wed Jul 24 14:23:35 2024 00:25:09.391 write: IOPS=985, 
BW=246MiB/s (258MB/s)(2491MiB/10113msec); 0 zone resets 00:25:09.391 slat (usec): min=27, max=62933, avg=868.26, stdev=2696.93 00:25:09.391 clat (usec): min=681, max=271135, avg=64038.09, stdev=40691.65 00:25:09.391 lat (usec): min=720, max=271192, avg=64906.35, stdev=41246.55 00:25:09.391 clat percentiles (msec): 00:25:09.391 | 1.00th=[ 8], 5.00th=[ 20], 10.00th=[ 21], 20.00th=[ 22], 00:25:09.391 | 30.00th=[ 23], 40.00th=[ 43], 50.00th=[ 65], 60.00th=[ 83], 00:25:09.391 | 70.00th=[ 89], 80.00th=[ 102], 90.00th=[ 113], 95.00th=[ 130], 00:25:09.391 | 99.00th=[ 163], 99.50th=[ 188], 99.90th=[ 257], 99.95th=[ 271], 00:25:09.391 | 99.99th=[ 271] 00:25:09.391 bw ( KiB/s): min=135168, max=758802, per=8.76%, avg=253461.25, stdev=161068.87, samples=20 00:25:09.391 iops : min= 528, max= 2964, avg=990.00, stdev=629.06, samples=20 00:25:09.391 lat (usec) : 750=0.01%, 1000=0.03% 00:25:09.391 lat (msec) : 2=0.02%, 4=0.19%, 10=1.78%, 20=4.78%, 50=36.01% 00:25:09.391 lat (msec) : 100=36.42%, 250=20.66%, 500=0.11% 00:25:09.391 cpu : usr=2.91%, sys=3.81%, ctx=2736, majf=0, minf=1 00:25:09.391 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:25:09.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:09.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:09.391 issued rwts: total=0,9965,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:09.391 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:09.391 job9: (groupid=0, jobs=1): err= 0: pid=165720: Wed Jul 24 14:23:35 2024 00:25:09.391 write: IOPS=1078, BW=270MiB/s (283MB/s)(2726MiB/10107msec); 0 zone resets 00:25:09.391 slat (usec): min=25, max=93373, avg=787.34, stdev=2370.16 00:25:09.391 clat (usec): min=631, max=241323, avg=58491.15, stdev=37521.19 00:25:09.391 lat (usec): min=705, max=241446, avg=59278.49, stdev=38013.96 00:25:09.391 clat percentiles (msec): 00:25:09.391 | 1.00th=[ 4], 5.00th=[ 13], 10.00th=[ 19], 20.00th=[ 22], 00:25:09.391 | 30.00th=[ 32], 40.00th=[ 43], 50.00th=[ 47], 60.00th=[ 66], 00:25:09.391 | 70.00th=[ 82], 80.00th=[ 89], 90.00th=[ 108], 95.00th=[ 126], 00:25:09.391 | 99.00th=[ 174], 99.50th=[ 180], 99.90th=[ 224], 99.95th=[ 232], 00:25:09.391 | 99.99th=[ 243] 00:25:09.391 bw ( KiB/s): min=134144, max=651776, per=9.59%, avg=277512.80, stdev=141858.45, samples=20 00:25:09.391 iops : min= 524, max= 2546, avg=1084.00, stdev=554.15, samples=20 00:25:09.391 lat (usec) : 750=0.05%, 1000=0.06% 00:25:09.391 lat (msec) : 2=0.28%, 4=0.76%, 10=2.77%, 20=8.52%, 50=39.26% 00:25:09.391 lat (msec) : 100=34.64%, 250=13.66% 00:25:09.391 cpu : usr=3.29%, sys=3.86%, ctx=3006, majf=0, minf=1 00:25:09.391 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:25:09.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:09.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:09.391 issued rwts: total=0,10905,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:09.391 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:09.391 job10: (groupid=0, jobs=1): err= 0: pid=165737: Wed Jul 24 14:23:35 2024 00:25:09.391 write: IOPS=1009, BW=252MiB/s (265MB/s)(2551MiB/10103msec); 0 zone resets 00:25:09.391 slat (usec): min=25, max=57916, avg=858.49, stdev=2330.13 00:25:09.391 clat (usec): min=1004, max=231029, avg=62475.95, stdev=37098.52 00:25:09.391 lat (usec): min=1114, max=232228, avg=63334.44, stdev=37614.97 00:25:09.391 clat percentiles (msec): 00:25:09.391 | 1.00th=[ 7], 5.00th=[ 17], 
10.00th=[ 21], 20.00th=[ 23], 00:25:09.391 | 30.00th=[ 41], 40.00th=[ 46], 50.00th=[ 63], 60.00th=[ 69], 00:25:09.391 | 70.00th=[ 83], 80.00th=[ 90], 90.00th=[ 112], 95.00th=[ 131], 00:25:09.391 | 99.00th=[ 163], 99.50th=[ 182], 99.90th=[ 209], 99.95th=[ 218], 00:25:09.391 | 99.99th=[ 232] 00:25:09.391 bw ( KiB/s): min=121856, max=710656, per=8.97%, avg=259571.30, stdev=148244.59, samples=20 00:25:09.391 iops : min= 476, max= 2776, avg=1013.90, stdev=579.11, samples=20 00:25:09.391 lat (msec) : 2=0.28%, 4=0.25%, 10=2.20%, 20=4.48%, 50=35.57% 00:25:09.391 lat (msec) : 100=42.21%, 250=15.01% 00:25:09.391 cpu : usr=3.17%, sys=3.88%, ctx=2939, majf=0, minf=1 00:25:09.391 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:25:09.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:09.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:09.391 issued rwts: total=0,10203,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:09.391 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:09.391 00:25:09.391 Run status group 0 (all jobs): 00:25:09.392 WRITE: bw=2825MiB/s (2962MB/s), 204MiB/s-317MiB/s (214MB/s-332MB/s), io=27.9GiB (30.0GB), run=10056-10114msec 00:25:09.392 00:25:09.392 Disk stats (read/write): 00:25:09.392 nvme0n1: ios=49/17883, merge=0/0, ticks=17/1225820, in_queue=1225837, util=97.25% 00:25:09.392 nvme10n1: ios=0/19178, merge=0/0, ticks=0/1225578, in_queue=1225578, util=97.35% 00:25:09.392 nvme1n1: ios=0/22118, merge=0/0, ticks=0/1230609, in_queue=1230609, util=97.69% 00:25:09.392 nvme2n1: ios=0/16286, merge=0/0, ticks=0/1226815, in_queue=1226815, util=97.84% 00:25:09.392 nvme3n1: ios=0/25419, merge=0/0, ticks=0/1226718, in_queue=1226718, util=97.89% 00:25:09.392 nvme4n1: ios=0/18134, merge=0/0, ticks=0/1228322, in_queue=1228322, util=98.15% 00:25:09.392 nvme5n1: ios=0/21580, merge=0/0, ticks=0/1231600, in_queue=1231600, util=98.22% 00:25:09.392 nvme6n1: ios=0/24132, merge=0/0, ticks=0/1228709, in_queue=1228709, util=98.39% 00:25:09.392 nvme7n1: ios=0/19738, merge=0/0, ticks=0/1223252, in_queue=1223252, util=98.77% 00:25:09.392 nvme8n1: ios=0/21634, merge=0/0, ticks=0/1226389, in_queue=1226389, util=98.96% 00:25:09.392 nvme9n1: ios=0/20230, merge=0/0, ticks=0/1224558, in_queue=1224558, util=98.99% 00:25:09.392 14:23:35 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:09.392 14:23:35 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:09.392 14:23:35 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:09.392 14:23:35 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:09.392 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:09.392 14:23:36 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:09.392 14:23:36 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:09.392 14:23:36 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:09.392 14:23:36 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:25:09.392 14:23:36 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:09.392 14:23:36 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK1 00:25:09.392 14:23:36 
nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:09.392 14:23:36 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:09.392 14:23:36 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.392 14:23:36 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:09.392 14:23:36 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.392 14:23:36 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:09.392 14:23:36 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:10.761 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:10.761 14:23:37 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:10.762 14:23:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:10.762 14:23:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:10.762 14:23:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:25:10.762 14:23:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:10.762 14:23:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK2 00:25:10.762 14:23:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:10.762 14:23:37 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:10.762 14:23:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.762 14:23:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:10.762 14:23:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.762 14:23:37 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:10.762 14:23:37 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:11.692 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:11.693 14:23:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:11.693 14:23:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:11.693 14:23:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:11.693 14:23:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:25:11.693 14:23:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:11.693 14:23:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK3 00:25:11.693 14:23:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:11.693 14:23:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:11.693 14:23:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.693 14:23:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:11.693 
14:23:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.693 14:23:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:11.693 14:23:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:13.064 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:13.064 14:23:40 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:13.064 14:23:40 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:13.064 14:23:40 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:13.064 14:23:40 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:25:13.064 14:23:40 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:13.064 14:23:40 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK4 00:25:13.064 14:23:40 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:13.064 14:23:40 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:13.064 14:23:40 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.064 14:23:40 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.064 14:23:40 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.064 14:23:40 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:13.064 14:23:40 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:13.997 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:13.997 14:23:41 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:13.997 14:23:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:13.997 14:23:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:13.997 14:23:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:25:13.997 14:23:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:13.997 14:23:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK5 00:25:13.997 14:23:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:13.997 14:23:41 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:13.997 14:23:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.997 14:23:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.997 14:23:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.997 14:23:41 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:13.997 14:23:41 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:15.368 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:15.368 14:23:42 
nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:15.368 14:23:42 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:15.368 14:23:42 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:15.368 14:23:42 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:25:15.368 14:23:42 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:15.368 14:23:42 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK6 00:25:15.368 14:23:42 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:15.368 14:23:42 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:15.368 14:23:42 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.368 14:23:42 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:15.368 14:23:42 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.368 14:23:42 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:15.368 14:23:42 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:16.302 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:16.302 14:23:43 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:16.302 14:23:43 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:16.302 14:23:43 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:16.302 14:23:43 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:25:16.302 14:23:43 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:16.302 14:23:43 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK7 00:25:16.302 14:23:43 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:16.302 14:23:43 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:16.302 14:23:43 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.302 14:23:43 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.302 14:23:43 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.302 14:23:43 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:16.302 14:23:43 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:17.233 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:17.233 14:23:44 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:17.233 14:23:44 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:17.233 14:23:44 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:17.233 14:23:44 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:25:17.233 14:23:44 
nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:17.233 14:23:44 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK8 00:25:17.233 14:23:44 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:17.233 14:23:44 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:17.233 14:23:44 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.233 14:23:44 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.233 14:23:44 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.233 14:23:44 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:17.233 14:23:44 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:18.604 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:18.604 14:23:45 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:18.604 14:23:45 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:18.604 14:23:45 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:18.604 14:23:45 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:25:18.604 14:23:45 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:18.604 14:23:45 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK9 00:25:18.604 14:23:45 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:18.604 14:23:45 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:18.604 14:23:45 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.604 14:23:45 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.604 14:23:45 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.604 14:23:45 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:18.604 14:23:45 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:19.537 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:19.537 14:23:46 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:19.537 14:23:46 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:19.537 14:23:46 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:19.537 14:23:46 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:25:19.537 14:23:46 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:19.537 14:23:46 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK10 00:25:19.537 14:23:46 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:19.537 14:23:46 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode10 00:25:19.537 14:23:46 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.537 14:23:46 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:19.537 14:23:46 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.537 14:23:46 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:19.537 14:23:46 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:20.907 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:20.907 14:23:47 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:20.907 14:23:47 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:20.907 14:23:47 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:20.907 14:23:47 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:25:20.907 14:23:47 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:20.907 14:23:47 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK11 00:25:20.907 14:23:47 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:20.907 14:23:47 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:20.907 14:23:47 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.907 14:23:47 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.907 14:23:47 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.907 14:23:47 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:20.907 14:23:47 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:20.907 14:23:47 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:20.907 14:23:47 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:20.907 14:23:47 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:25:20.907 14:23:47 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:25:20.907 14:23:47 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:25:20.907 14:23:47 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:25:20.908 14:23:47 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:20.908 14:23:47 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:25:20.908 rmmod nvme_rdma 00:25:20.908 rmmod nvme_fabrics 00:25:20.908 14:23:47 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:20.908 14:23:47 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:25:20.908 14:23:47 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:25:20.908 14:23:47 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 159609 ']' 00:25:20.908 14:23:47 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 159609 00:25:20.908 14:23:47 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@946 -- # '[' -z 159609 
']' 00:25:20.908 14:23:47 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@950 -- # kill -0 159609 00:25:20.908 14:23:47 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@951 -- # uname 00:25:20.908 14:23:47 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:20.908 14:23:47 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 159609 00:25:20.908 14:23:47 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:20.908 14:23:47 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:20.908 14:23:47 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@964 -- # echo 'killing process with pid 159609' 00:25:20.908 killing process with pid 159609 00:25:20.908 14:23:47 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@965 -- # kill 159609 00:25:20.908 14:23:47 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@970 -- # wait 159609 00:25:21.497 14:23:48 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:21.497 14:23:48 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:25:21.497 00:25:21.497 real 1m13.081s 00:25:21.497 user 4m44.766s 00:25:21.497 sys 0m13.242s 00:25:21.497 14:23:48 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:21.497 14:23:48 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.497 ************************************ 00:25:21.497 END TEST nvmf_multiconnection 00:25:21.497 ************************************ 00:25:21.497 14:23:48 nvmf_rdma -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:25:21.497 14:23:48 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:21.497 14:23:48 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:21.497 14:23:48 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:21.497 ************************************ 00:25:21.497 START TEST nvmf_initiator_timeout 00:25:21.497 ************************************ 00:25:21.497 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:25:21.497 * Looking for test storage... 
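The @1215-@1227 lines repeated through the teardown above come from the waitforserial_disconnect helper in common/autotest_common.sh. From the traced commands it is approximately the following; only the lsblk probes and the return path are visible in the log, so the retry bound here is an assumption:

  waitforserial_disconnect() {
      local i=0
      # poll until the block device carrying this serial is gone
      while lsblk -o NAME,SERIAL | grep -q -w "$1"; do
          ((++i > 15)) && return 1    # assumed upper bound on retries
          sleep 1
      done
      # double-check with the flat listing before declaring success
      if lsblk -l -o NAME,SERIAL | grep -q -w "$1"; then
          return 1
      fi
      return 0
  }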
00:25:21.497 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:25:21.497 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:21.497 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:21.497 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:21.497 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:21.497 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:21.497 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:21.497 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:21.497 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:21.497 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:21.497 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:21.497 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:21.497 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:21.497 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:25:21.497 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:25:21.497 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:21.497 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:21.497 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:21.497 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:21.497 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:21.497 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:21.497 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:21.497 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:21.497 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.497 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.498 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.498 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:21.498 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.498 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:25:21.498 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:21.498 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:21.498 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:21.498 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:21.498 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:21.498 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:21.498 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:21.498 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:21.498 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:21.498 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:21.498 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:21.498 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:25:21.498 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:21.498 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:21.498 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:21.498 14:23:48 
nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:21.498 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.498 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:21.498 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.498 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:21.498 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:21.498 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:25:21.498 14:23:48 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:25:24.043 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:25:24.043 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:24.043 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:24.044 14:23:51 
nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:25:24.044 Found net devices under 0000:81:00.0: mlx_0_0 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:25:24.044 Found net devices under 0000:81:00.1: mlx_0_1 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@420 -- # rdma_device_init 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # uname 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@62 -- # modprobe ib_cm 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@63 -- # modprobe ib_core 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@64 -- # modprobe ib_umad 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@66 -- # modprobe iw_cm 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # allocate_nic_ips 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@73 -- # get_rdma_if_list 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:24.044 14:23:51 
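Before the initiator_timeout test can add an RDMA listener it needs the IPv4 addresses of the two mlx_0_* netdevs found above. The @112/@113 trace lines just below show how nvmf/common.sh extracts them; as a sketch (the trace routes the results through RDMA_IP_LIST with head/tail, simplified here):

  # parse the address out of `ip -o -4 addr show`, exactly as traced
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)     # 192.168.100.8 on this rig
  NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)    # 192.168.100.9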
nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # continue 2 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # continue 2 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:25:24.044 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:24.044 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:25:24.044 altname enp129s0f0np0 00:25:24.044 inet 192.168.100.8/24 scope global mlx_0_0 00:25:24.044 valid_lft forever preferred_lft forever 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:25:24.044 13: mlx_0_1: mtu 
1500 qdisc mq state DOWN group default qlen 1000 00:25:24.044 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:25:24.044 altname enp129s0f1np1 00:25:24.044 inet 192.168.100.9/24 scope global mlx_0_1 00:25:24.044 valid_lft forever preferred_lft forever 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@86 -- # get_rdma_if_list 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # continue 2 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # continue 2 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- 
nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:25:24.044 192.168.100.9' 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:25:24.044 192.168.100.9' 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # head -n 1 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:25:24.044 192.168.100.9' 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@458 -- # tail -n +2 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@458 -- # head -n 1 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:24.044 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:25:24.045 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:24.045 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:25:24.045 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:25:24.045 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:25:24.045 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:24.045 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:24.045 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:24.045 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:24.045 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=170455 00:25:24.045 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:24.045 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 170455 00:25:24.045 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@827 -- # '[' -z 170455 ']' 00:25:24.045 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:24.045 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:24.045 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:24.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
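With both ports reporting 192.168.100.8/9, the nvmfappstart step traced just above launches the target. Collapsed out of the trace, it amounts to (paths and flags as printed in the log):

  # start the nvmf target on 4 cores (-m 0xF) with all tracepoint groups enabled (-e 0xFFFF)
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # block until the app answers on its RPC socket, /var/tmp/spdk.sock
  waitforlisten "$nvmfpid"

The DPDK EAL initialization output and the four reactor-started notices that follow are the target coming up.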
00:25:24.045 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:24.045 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:24.045 [2024-07-24 14:23:51.401971] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:25:24.045 [2024-07-24 14:23:51.402043] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:24.303 EAL: No free 2048 kB hugepages reported on node 1 00:25:24.303 [2024-07-24 14:23:51.472692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:24.303 [2024-07-24 14:23:51.564006] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:24.303 [2024-07-24 14:23:51.564056] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:24.303 [2024-07-24 14:23:51.564098] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:24.303 [2024-07-24 14:23:51.564111] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:24.303 [2024-07-24 14:23:51.564122] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:24.303 [2024-07-24 14:23:51.567814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:24.303 [2024-07-24 14:23:51.567899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:24.303 [2024-07-24 14:23:51.567932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:24.303 [2024-07-24 14:23:51.567936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.561 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:24.561 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # return 0 00:25:24.561 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:24.561 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:24.561 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:24.561 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:24.561 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:24.561 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:24.561 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.561 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:24.561 Malloc0 00:25:24.561 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.561 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:24.561 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.561 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:25:24.561 Delay0 00:25:24.561 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.561 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:24.561 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.561 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:24.561 [2024-07-24 14:23:51.779224] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x24c31f0/0x24392c0) succeed. 00:25:24.561 [2024-07-24 14:23:51.790435] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x24c3640/0x24c4310) succeed. 00:25:24.561 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.561 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:24.561 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.561 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:24.818 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.818 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:24.818 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.818 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:24.818 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.819 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:24.819 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.819 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:24.819 [2024-07-24 14:23:51.954604] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:24.819 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.819 14:23:51 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:25:25.750 14:23:53 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:25.750 14:23:53 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1194 -- # local i=0 00:25:25.750 14:23:53 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:25.750 14:23:53 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:25.751 14:23:53 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # sleep 2 00:25:28.273 14:23:55 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:28.273 14:23:55 
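The sequence above builds the device under test: a 64 MiB malloc bdev wrapped in a delay bdev whose four latency knobs (average and p99, read and write, in microseconds) all start at 30, exported through an RDMA subsystem and connected from the host. Collapsed out of the @19-@29 trace, the RPC sequence is roughly:

  rpc_cmd bdev_malloc_create 64 512 -b Malloc0      # 64 MiB backing bdev, 512 B blocks
  rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
  rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  # NVME_HOST expands to the --hostnqn/--hostid pair generated in the header above
  nvme connect -i 15 "${NVME_HOST[@]}" -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420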
nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:28.273 14:23:55 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:25:28.273 14:23:55 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:28.273 14:23:55 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:28.273 14:23:55 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # return 0 00:25:28.273 14:23:55 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=171006 00:25:28.273 14:23:55 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:28.273 14:23:55 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:28.273 [global] 00:25:28.273 thread=1 00:25:28.273 invalidate=1 00:25:28.273 rw=write 00:25:28.273 time_based=1 00:25:28.273 runtime=60 00:25:28.273 ioengine=libaio 00:25:28.273 direct=1 00:25:28.273 bs=4096 00:25:28.273 iodepth=1 00:25:28.273 norandommap=0 00:25:28.273 numjobs=1 00:25:28.273 00:25:28.273 verify_dump=1 00:25:28.273 verify_backlog=512 00:25:28.273 verify_state_save=0 00:25:28.273 do_verify=1 00:25:28.273 verify=crc32c-intel 00:25:28.273 [job0] 00:25:28.273 filename=/dev/nvme0n1 00:25:28.273 Could not set queue depth (nvme0n1) 00:25:28.273 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:28.273 fio-3.35 00:25:28.273 Starting 1 thread 00:25:30.800 14:23:58 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:30.800 14:23:58 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.800 14:23:58 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:30.800 true 00:25:30.800 14:23:58 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.800 14:23:58 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:30.800 14:23:58 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.800 14:23:58 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:30.800 true 00:25:30.800 14:23:58 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.800 14:23:58 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:30.800 14:23:58 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.800 14:23:58 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:30.800 true 00:25:30.800 14:23:58 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.800 14:23:58 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:30.800 14:23:58 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.800 14:23:58 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:30.800 true 00:25:30.800 
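The four bdev_delay_update_latency calls just above push Delay0 from 30 us to 31 s (310 s for p99 write), presumably well past the initiator's timeout window, while the 60-second fio write job keeps running. In outline, the pattern the trace follows around it:

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v &
  fio_pid=$!
  sleep 3
  rpc_cmd bdev_delay_update_latency Delay0 avg_read  31000000   # microseconds
  rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000
  rpc_cmd bdev_delay_update_latency Delay0 p99_read  31000000
  rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000
  sleep 3
  # restore the tiny latencies so outstanding I/O can drain (the @48-@51 lines below)
  rpc_cmd bdev_delay_update_latency Delay0 avg_read 30   # and likewise for the other three
  wait "$fio_pid"    # the test passes only if fio exits 0

The restore and the wait are the @48-@54 lines that follow; fio surviving the stall is what the closing "nvmf hotplug test: fio successful as expected" message asserts.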
14:23:58 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.800 14:23:58 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:34.077 14:24:01 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:34.077 14:24:01 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.077 14:24:01 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:34.077 true 00:25:34.077 14:24:01 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.077 14:24:01 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:34.077 14:24:01 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.077 14:24:01 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:34.077 true 00:25:34.077 14:24:01 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.077 14:24:01 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:34.077 14:24:01 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.077 14:24:01 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:34.077 true 00:25:34.077 14:24:01 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.077 14:24:01 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:34.077 14:24:01 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.077 14:24:01 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:34.077 true 00:25:34.077 14:24:01 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.077 14:24:01 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:34.077 14:24:01 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 171006 00:26:30.355 00:26:30.355 job0: (groupid=0, jobs=1): err= 0: pid=171080: Wed Jul 24 14:24:55 2024 00:26:30.355 read: IOPS=1117, BW=4471KiB/s (4579kB/s)(262MiB/60000msec) 00:26:30.355 slat (nsec): min=3922, max=51184, avg=8258.64, stdev=3974.61 00:26:30.355 clat (usec): min=93, max=42819k, avg=757.38, stdev=165333.59 00:26:30.355 lat (usec): min=98, max=42819k, avg=765.63, stdev=165333.60 00:26:30.355 clat percentiles (usec): 00:26:30.355 | 1.00th=[ 100], 5.00th=[ 104], 10.00th=[ 106], 20.00th=[ 110], 00:26:30.355 | 30.00th=[ 112], 40.00th=[ 115], 50.00th=[ 118], 60.00th=[ 120], 00:26:30.355 | 70.00th=[ 124], 80.00th=[ 129], 90.00th=[ 137], 95.00th=[ 143], 00:26:30.355 | 99.00th=[ 153], 99.50th=[ 157], 99.90th=[ 167], 99.95th=[ 172], 00:26:30.355 | 99.99th=[ 198] 00:26:30.355 write: IOPS=1118, BW=4473KiB/s (4580kB/s)(262MiB/60000msec); 0 zone resets 00:26:30.355 slat (usec): min=4, max=946, avg= 9.94, stdev= 6.32 00:26:30.355 clat (usec): min=26, max=342, avg=114.26, stdev=11.93 00:26:30.355 lat (usec): min=95, max=973, avg=124.19, stdev=14.76 00:26:30.355 clat percentiles (usec): 00:26:30.355 | 1.00th=[ 96], 5.00th=[ 99], 10.00th=[ 101], 20.00th=[ 104], 00:26:30.355 | 30.00th=[ 108], 40.00th=[ 110], 50.00th=[ 
113], 60.00th=[ 116], 00:26:30.355 | 70.00th=[ 119], 80.00th=[ 124], 90.00th=[ 133], 95.00th=[ 139], 00:26:30.355 | 99.00th=[ 149], 99.50th=[ 153], 99.90th=[ 161], 99.95th=[ 167], 00:26:30.355 | 99.99th=[ 190] 00:26:30.355 bw ( KiB/s): min= 7584, max=18008, per=100.00%, avg=15420.24, stdev=1805.59, samples=34 00:26:30.355 iops : min= 1896, max= 4502, avg=3855.06, stdev=451.40, samples=34 00:26:30.355 lat (usec) : 50=0.01%, 100=3.88%, 250=96.11%, 500=0.01% 00:26:30.355 lat (msec) : >=2000=0.01% 00:26:30.355 cpu : usr=1.20%, sys=2.42%, ctx=134169, majf=0, minf=109 00:26:30.355 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:30.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.355 issued rwts: total=67072,67094,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.355 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:30.355 00:26:30.355 Run status group 0 (all jobs): 00:26:30.355 READ: bw=4471KiB/s (4579kB/s), 4471KiB/s-4471KiB/s (4579kB/s-4579kB/s), io=262MiB (275MB), run=60000-60000msec 00:26:30.355 WRITE: bw=4473KiB/s (4580kB/s), 4473KiB/s-4473KiB/s (4580kB/s-4580kB/s), io=262MiB (275MB), run=60000-60000msec 00:26:30.355 00:26:30.355 Disk stats (read/write): 00:26:30.355 nvme0n1: ios=66865/66830, merge=0/0, ticks=7933/7527, in_queue=15460, util=99.72% 00:26:30.355 14:24:55 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:30.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1215 -- # local i=0 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # return 0 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:30.355 nvmf hotplug test: fio successful as expected 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:30.355 14:24:56 
nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:26:30.355 rmmod nvme_rdma 00:26:30.355 rmmod nvme_fabrics 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 170455 ']' 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 170455 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@946 -- # '[' -z 170455 ']' 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # kill -0 170455 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # uname 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 170455 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 170455' 00:26:30.355 killing process with pid 170455 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # kill 170455 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # wait 170455 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:26:30.355 00:26:30.355 real 1m8.326s 00:26:30.355 user 4m22.304s 00:26:30.355 sys 0m3.962s 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:30.355 14:24:56 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:30.355 ************************************ 00:26:30.355 END TEST nvmf_initiator_timeout 00:26:30.355 ************************************ 00:26:30.355 14:24:56 nvmf_rdma -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:26:30.355 14:24:56 nvmf_rdma -- nvmf/nvmf.sh@72 -- # '[' rdma = tcp ']' 00:26:30.355 14:24:56 nvmf_rdma -- nvmf/nvmf.sh@78 -- # [[ rdma == \r\d\m\a ]] 00:26:30.355 14:24:56 nvmf_rdma -- nvmf/nvmf.sh@81 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 
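The initiator-timeout case ends cleanly at this point: fio exited err=0 with the write/verify pass intact (67094 writes and 67072 verify reads issued; the lone >=2000 ms latency bucket records I/O held while Delay0 was slowed), the controller disconnects, and nvmftestfini unloads nvme-rdma/nvme-fabrics and kills target pid 170455. The harness then moves to the next case through run_test; stripped of that wrapper, the invocation amounts to:

    cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
    test/nvmf/target/srq_overwhelm.sh --transport=rdma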
00:26:30.355 14:24:56 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:30.356 14:24:56 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:30.356 14:24:56 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:30.356 ************************************ 00:26:30.356 START TEST nvmf_srq_overwhelm 00:26:30.356 ************************************ 00:26:30.356 14:24:56 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:26:30.356 * Looking for test storage... 00:26:30.356 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@47 -- # : 0 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:30.356 14:24:57 
nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@285 -- # xtrace_disable 00:26:30.356 14:24:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # pci_devs=() 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # net_devs=() 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # e810=() 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # local -ga e810 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # x722=() 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # local -ga x722 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # mlx=() 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # local -ga mlx 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:26:32.256 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:26:32.256 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:26:32.256 Found net devices under 0000:81:00.0: mlx_0_0 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:32.256 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:26:32.256 Found net devices under 0000:81:00.1: mlx_0_1 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # is_hw=yes 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@420 -- # rdma_device_init 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # uname 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # modprobe ib_cm 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@63 -- # modprobe ib_core 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@64 -- # modprobe ib_umad 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe iw_cm 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # allocate_nic_ips 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # get_rdma_if_list 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:32.257 14:24:59 
nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:26:32.257 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:32.257 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:26:32.257 altname enp129s0f0np0 00:26:32.257 inet 192.168.100.8/24 scope global mlx_0_0 00:26:32.257 valid_lft forever preferred_lft forever 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 
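Each interface's address is recovered with the three-stage pipeline visible above; run standalone (assuming an interface named mlx_0_0 with an IPv4 address assigned) it is:

    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1
    # prints 192.168.100.8 on this rig; mlx_0_1 yields 192.168.100.9

The -o flag keeps each address record on a single line, awk picks the CIDR field, and cut strips the /24 prefix length.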
00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:26:32.257 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:32.257 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:26:32.257 altname enp129s0f1np1 00:26:32.257 inet 192.168.100.9/24 scope global mlx_0_1 00:26:32.257 valid_lft forever preferred_lft forever 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # return 0 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # get_rdma_if_list 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:32.257 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm 
-- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:26:32.516 192.168.100.9' 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:26:32.516 192.168.100.9' 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # head -n 1 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:26:32.516 192.168.100.9' 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # tail -n +2 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # head -n 1 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@481 -- # nvmfpid=180869 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # waitforlisten 180869 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@827 -- # '[' -z 180869 ']' 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:32.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
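From those probes the two target addresses fall out with plain head/tail, exactly as traced:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9

With an address confirmed, nvme-rdma is modprobed and the target comes up via nvmfappstart -m 0xF, i.e. reactors on cores 0-3 (pid 180869 here).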
00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:32.516 14:24:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:32.516 [2024-07-24 14:24:59.715216] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:26:32.516 [2024-07-24 14:24:59.715305] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:32.516 EAL: No free 2048 kB hugepages reported on node 1 00:26:32.516 [2024-07-24 14:24:59.790262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:32.516 [2024-07-24 14:24:59.884305] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:32.516 [2024-07-24 14:24:59.884358] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:32.516 [2024-07-24 14:24:59.884373] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:32.516 [2024-07-24 14:24:59.884385] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:32.516 [2024-07-24 14:24:59.884396] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:32.774 [2024-07-24 14:24:59.886812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.774 [2024-07-24 14:24:59.888454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:32.774 [2024-07-24 14:24:59.888546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:32.774 [2024-07-24 14:24:59.888549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.774 14:25:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:32.774 14:25:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@860 -- # return 0 00:26:32.774 14:25:00 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:32.774 14:25:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:32.774 14:25:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:32.774 14:25:00 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:32.774 14:25:00 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:26:32.774 14:25:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.774 14:25:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:32.774 [2024-07-24 14:25:00.050475] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ba4a00/0x1ba8ef0) succeed. 00:26:32.774 [2024-07-24 14:25:00.061402] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ba5ff0/0x1bea580) succeed. 
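Both mlx5 ports register and the RDMA transport is created with --num-shared-buffers 1024 -u 8192 -s 1024; -s maps to rpc.py's max-srq-depth, so the shared receive queue this test sets out to overwhelm holds 1024 entries. The six-subsystem provisioning loop the rest of the trace walks (i = 0..5) condenses to the following sketch, with rpc.py standing in for the script's rpc_cmd wrapper and the hostnqn/hostid as generated earlier:

    for i in $(seq 0 5); do
        rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
        rpc.py bdev_malloc_create 64 512 -b Malloc$i        # 64 MiB bdev, 512 B blocks
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
        nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode$i -a 192.168.100.8 -s 4420 \
            --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 \
            --hostid=6b85a288-a0c4-e211-af09-001e678e7911
        waitforblk nvme${i}n1                               # block until the namespace appears
    done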
00:26:32.774 14:25:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.774 14:25:00 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:26:32.774 14:25:00 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:26:32.774 14:25:00 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:26:32.774 14:25:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.774 14:25:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:32.774 14:25:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.774 14:25:00 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:32.774 14:25:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.774 14:25:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:32.774 Malloc0 00:26:32.774 14:25:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.774 14:25:00 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:26:32.774 14:25:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.774 14:25:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:33.032 14:25:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.032 14:25:00 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:26:33.032 14:25:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.032 14:25:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:33.032 [2024-07-24 14:25:00.154624] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:33.032 14:25:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.032 14:25:00 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:26:34.404 14:25:01 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:26:34.404 14:25:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # local i=0 00:26:34.404 14:25:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # lsblk -l -o NAME 00:26:34.404 14:25:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # grep -q -w nvme0n1 00:26:34.404 14:25:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:26:34.404 14:25:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:26:34.404 14:25:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # return 0 00:26:34.404 14:25:01 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:26:34.404 14:25:01 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:34.404 14:25:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.404 14:25:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:34.404 14:25:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.404 14:25:01 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:34.404 14:25:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.404 14:25:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:34.404 Malloc1 00:26:34.404 14:25:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.404 14:25:01 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:34.404 14:25:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.404 14:25:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:34.404 14:25:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.404 14:25:01 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:34.404 14:25:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.404 14:25:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:34.404 14:25:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.404 14:25:01 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:26:35.337 14:25:02 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:26:35.337 14:25:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # local i=0 00:26:35.337 14:25:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # lsblk -l -o NAME 00:26:35.337 14:25:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # grep -q -w nvme1n1 00:26:35.337 14:25:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:26:35.337 14:25:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme1n1 00:26:35.337 14:25:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # return 0 00:26:35.337 14:25:02 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:26:35.337 14:25:02 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:26:35.337 14:25:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.337 14:25:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:35.337 14:25:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.337 14:25:02 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:26:35.337 14:25:02 nvmf_rdma.nvmf_srq_overwhelm -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.337 14:25:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:35.337 Malloc2 00:26:35.337 14:25:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.337 14:25:02 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:26:35.337 14:25:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.337 14:25:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:35.337 14:25:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.337 14:25:02 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:26:35.337 14:25:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.337 14:25:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:35.337 14:25:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.337 14:25:02 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:26:36.708 14:25:03 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:26:36.708 14:25:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # local i=0 00:26:36.708 14:25:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # lsblk -l -o NAME 00:26:36.708 14:25:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # grep -q -w nvme2n1 00:26:36.708 14:25:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:26:36.708 14:25:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme2n1 00:26:36.708 14:25:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # return 0 00:26:36.708 14:25:03 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:26:36.708 14:25:03 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:26:36.708 14:25:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.708 14:25:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:36.708 14:25:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.708 14:25:03 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:26:36.708 14:25:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.708 14:25:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:36.708 Malloc3 00:26:36.708 14:25:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.708 14:25:03 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:26:36.708 14:25:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.708 14:25:03 
nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:36.708 14:25:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.708 14:25:03 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:26:36.708 14:25:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.708 14:25:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:36.708 14:25:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.708 14:25:03 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:26:38.079 14:25:05 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:26:38.079 14:25:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # local i=0 00:26:38.079 14:25:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # lsblk -l -o NAME 00:26:38.079 14:25:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # grep -q -w nvme3n1 00:26:38.079 14:25:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:26:38.079 14:25:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme3n1 00:26:38.079 14:25:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # return 0 00:26:38.079 14:25:05 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:26:38.079 14:25:05 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:26:38.079 14:25:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.079 14:25:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:38.079 14:25:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.079 14:25:05 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:26:38.079 14:25:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.079 14:25:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:38.079 Malloc4 00:26:38.079 14:25:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.079 14:25:05 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:26:38.079 14:25:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.079 14:25:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:38.079 14:25:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.079 14:25:05 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:26:38.079 14:25:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.079 14:25:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 
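The waitforblk gate after every connect is just a retry loop over lsblk, per the paired lsblk/grep calls in the trace. A minimal sketch of that shape (the real helper lives in autotest_common.sh; the retry budget below is illustrative, not taken from the trace):

    waitforblk() {
        local i=0
        while ! lsblk -l -o NAME | grep -q -w "$1"; do
            (( ++i > 15 )) && return 1   # give up if the device never shows
            sleep 1
        done
        return 0
    }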
00:26:38.079 14:25:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.079 14:25:05 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:26:39.016 14:25:06 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:26:39.016 14:25:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # local i=0 00:26:39.016 14:25:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # lsblk -l -o NAME 00:26:39.016 14:25:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # grep -q -w nvme4n1 00:26:39.016 14:25:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:26:39.016 14:25:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme4n1 00:26:39.016 14:25:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # return 0 00:26:39.016 14:25:06 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:26:39.016 14:25:06 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:26:39.016 14:25:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.016 14:25:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:39.016 14:25:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.016 14:25:06 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:26:39.016 14:25:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.016 14:25:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:39.277 Malloc5 00:26:39.277 14:25:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.277 14:25:06 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:26:39.277 14:25:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.277 14:25:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:39.277 14:25:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.277 14:25:06 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:26:39.277 14:25:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.277 14:25:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:39.277 14:25:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.277 14:25:06 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:26:40.208 14:25:07 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:26:40.208 14:25:07 nvmf_rdma.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1231 -- # local i=0 00:26:40.208 14:25:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # lsblk -l -o NAME 00:26:40.208 14:25:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1232 -- # grep -q -w nvme5n1 00:26:40.466 14:25:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:26:40.466 14:25:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1238 -- # grep -q -w nvme5n1 00:26:40.466 14:25:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # return 0 00:26:40.466 14:25:07 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:26:40.466 [global] 00:26:40.466 thread=1 00:26:40.466 invalidate=1 00:26:40.466 rw=read 00:26:40.466 time_based=1 00:26:40.466 runtime=10 00:26:40.466 ioengine=libaio 00:26:40.466 direct=1 00:26:40.466 bs=1048576 00:26:40.466 iodepth=128 00:26:40.466 norandommap=1 00:26:40.466 numjobs=13 00:26:40.466 00:26:40.466 [job0] 00:26:40.466 filename=/dev/nvme0n1 00:26:40.466 [job1] 00:26:40.466 filename=/dev/nvme1n1 00:26:40.466 [job2] 00:26:40.466 filename=/dev/nvme2n1 00:26:40.466 [job3] 00:26:40.466 filename=/dev/nvme3n1 00:26:40.466 [job4] 00:26:40.466 filename=/dev/nvme4n1 00:26:40.466 [job5] 00:26:40.466 filename=/dev/nvme5n1 00:26:40.466 Could not set queue depth (nvme0n1) 00:26:40.466 Could not set queue depth (nvme1n1) 00:26:40.466 Could not set queue depth (nvme2n1) 00:26:40.466 Could not set queue depth (nvme3n1) 00:26:40.466 Could not set queue depth (nvme4n1) 00:26:40.466 Could not set queue depth (nvme5n1) 00:26:40.466 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:26:40.466 ... 00:26:40.466 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:26:40.466 ... 00:26:40.466 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:26:40.466 ... 00:26:40.466 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:26:40.466 ... 00:26:40.466 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:26:40.466 ... 00:26:40.466 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:26:40.466 ... 
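With all six namespaces visible, the overwhelm workload starts. Reading the fio-wrapper flags against the [global] section they generate: -i 1048576 becomes bs=1048576, -d 128 becomes iodepth=128, -t read becomes rw=read, -r 10 becomes a 10-second time_based run, and -n 13 becomes numjobs=13 per device file. Six devices at 13 jobs each gives the 78 threads fio announces next, and at queue depth 128 that permits up to 78 x 128 = 9984 outstanding reads against an SRQ of only 1024 entries, which is the oversubscription the test's name promises.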
00:26:40.466 fio-3.35 00:26:40.466 Starting 78 threads 00:26:55.341 00:26:55.341 job0: (groupid=0, jobs=1): err= 0: pid=182022: Wed Jul 24 14:25:22 2024 00:26:55.341 read: IOPS=0, BW=250KiB/s (256kB/s)(3072KiB/12277msec) 00:26:55.341 slat (msec): min=43, max=10697, avg=4077.93, stdev=5778.50 00:26:55.341 clat (msec): min=42, max=10783, avg=7189.17, stdev=6188.82 00:26:55.341 lat (msec): min=10740, max=12276, avg=11267.11, stdev=874.68 00:26:55.341 clat percentiles (msec): 00:26:55.341 | 1.00th=[ 43], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 43], 00:26:55.342 | 30.00th=[ 43], 40.00th=[10805], 50.00th=[10805], 60.00th=[10805], 00:26:55.342 | 70.00th=[10805], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805], 00:26:55.342 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:26:55.342 | 99.99th=[10805] 00:26:55.342 lat (msec) : 50=33.33%, >=2000=66.67% 00:26:55.342 cpu : usr=0.00%, sys=0.02%, ctx=15, majf=0, minf=769 00:26:55.342 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:55.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.342 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.342 issued rwts: total=3,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.342 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.342 job0: (groupid=0, jobs=1): err= 0: pid=182023: Wed Jul 24 14:25:22 2024 00:26:55.342 read: IOPS=24, BW=24.0MiB/s (25.2MB/s)(296MiB/12333msec) 00:26:55.342 slat (usec): min=71, max=4221.1k, avg=34360.74, stdev=284706.62 00:26:55.342 clat (msec): min=628, max=11081, avg=5052.58, stdev=4791.15 00:26:55.342 lat (msec): min=643, max=11083, avg=5086.94, stdev=4797.36 00:26:55.342 clat percentiles (msec): 00:26:55.342 | 1.00th=[ 634], 5.00th=[ 642], 10.00th=[ 667], 20.00th=[ 785], 00:26:55.342 | 30.00th=[ 860], 40.00th=[ 986], 50.00th=[ 1083], 60.00th=[10000], 00:26:55.342 | 70.00th=[10402], 80.00th=[10805], 90.00th=[10939], 95.00th=[11073], 00:26:55.342 | 99.00th=[11073], 99.50th=[11073], 99.90th=[11073], 99.95th=[11073], 00:26:55.342 | 99.99th=[11073] 00:26:55.342 bw ( KiB/s): min= 2048, max=198656, per=3.13%, avg=57658.50, stdev=76232.60, samples=6 00:26:55.342 iops : min= 2, max= 194, avg=56.00, stdev=74.54, samples=6 00:26:55.342 lat (msec) : 750=16.89%, 1000=24.32%, 2000=14.19%, >=2000=44.59% 00:26:55.342 cpu : usr=0.01%, sys=0.82%, ctx=401, majf=0, minf=32769 00:26:55.342 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.7%, 16=5.4%, 32=10.8%, >=64=78.7% 00:26:55.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.342 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:26:55.342 issued rwts: total=296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.342 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.342 job0: (groupid=0, jobs=1): err= 0: pid=182024: Wed Jul 24 14:25:22 2024 00:26:55.342 read: IOPS=1, BW=1285KiB/s (1316kB/s)(18.0MiB/14346msec) 00:26:55.342 slat (usec): min=864, max=4354.7k, avg=683205.79, stdev=1443626.08 00:26:55.342 clat (msec): min=2048, max=14345, avg=12707.80, stdev=3494.67 00:26:55.342 lat (msec): min=6284, max=14345, avg=13391.00, stdev=2278.67 00:26:55.342 clat percentiles (msec): 00:26:55.342 | 1.00th=[ 2056], 5.00th=[ 2056], 10.00th=[ 6275], 20.00th=[12818], 00:26:55.342 | 30.00th=[14295], 40.00th=[14295], 50.00th=[14295], 60.00th=[14295], 00:26:55.342 | 70.00th=[14295], 80.00th=[14295], 90.00th=[14295], 95.00th=[14295], 00:26:55.342 | 99.00th=[14295], 
99.50th=[14295], 99.90th=[14295], 99.95th=[14295], 00:26:55.342 | 99.99th=[14295] 00:26:55.342 lat (msec) : >=2000=100.00% 00:26:55.342 cpu : usr=0.00%, sys=0.15%, ctx=32, majf=0, minf=4609 00:26:55.342 IO depths : 1=5.6%, 2=11.1%, 4=22.2%, 8=44.4%, 16=16.7%, 32=0.0%, >=64=0.0% 00:26:55.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.342 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:26:55.342 issued rwts: total=18,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.342 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.342 job0: (groupid=0, jobs=1): err= 0: pid=182025: Wed Jul 24 14:25:22 2024 00:26:55.342 read: IOPS=1, BW=1998KiB/s (2046kB/s)(24.0MiB/12299msec) 00:26:55.342 slat (usec): min=532, max=6358.4k, avg=421509.21, stdev=1365310.57 00:26:55.342 clat (msec): min=2182, max=12297, avg=10285.35, stdev=3415.61 00:26:55.342 lat (msec): min=4297, max=12298, avg=10706.86, stdev=2966.88 00:26:55.342 clat percentiles (msec): 00:26:55.342 | 1.00th=[ 2198], 5.00th=[ 4329], 10.00th=[ 4329], 20.00th=[ 4396], 00:26:55.342 | 30.00th=[10805], 40.00th=[12281], 50.00th=[12281], 60.00th=[12281], 00:26:55.342 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:26:55.342 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:26:55.342 | 99.99th=[12281] 00:26:55.342 lat (msec) : >=2000=100.00% 00:26:55.342 cpu : usr=0.00%, sys=0.12%, ctx=38, majf=0, minf=6145 00:26:55.342 IO depths : 1=4.2%, 2=8.3%, 4=16.7%, 8=33.3%, 16=37.5%, 32=0.0%, >=64=0.0% 00:26:55.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.342 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:26:55.342 issued rwts: total=24,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.342 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.342 job0: (groupid=0, jobs=1): err= 0: pid=182026: Wed Jul 24 14:25:22 2024 00:26:55.342 read: IOPS=45, BW=45.0MiB/s (47.2MB/s)(551MiB/12235msec) 00:26:55.342 slat (usec): min=56, max=2160.3k, avg=18280.48, stdev=165393.42 00:26:55.342 clat (msec): min=279, max=10156, avg=2720.00, stdev=3944.98 00:26:55.342 lat (msec): min=280, max=10156, avg=2738.28, stdev=3955.60 00:26:55.342 clat percentiles (msec): 00:26:55.342 | 1.00th=[ 279], 5.00th=[ 284], 10.00th=[ 284], 20.00th=[ 317], 00:26:55.342 | 30.00th=[ 380], 40.00th=[ 617], 50.00th=[ 735], 60.00th=[ 743], 00:26:55.342 | 70.00th=[ 802], 80.00th=[ 9866], 90.00th=[10000], 95.00th=[10134], 00:26:55.342 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:26:55.342 | 99.99th=[10134] 00:26:55.342 bw ( KiB/s): min= 1565, max=320894, per=5.88%, avg=108357.38, stdev=137819.79, samples=8 00:26:55.342 iops : min= 1, max= 313, avg=105.62, stdev=134.52, samples=8 00:26:55.342 lat (msec) : 500=34.30%, 750=27.04%, 1000=14.52%, 2000=0.18%, >=2000=23.96% 00:26:55.342 cpu : usr=0.03%, sys=0.78%, ctx=479, majf=0, minf=32769 00:26:55.342 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=2.9%, 32=5.8%, >=64=88.6% 00:26:55.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.342 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:26:55.342 issued rwts: total=551,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.342 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.342 job0: (groupid=0, jobs=1): err= 0: pid=182027: Wed Jul 24 14:25:22 2024 00:26:55.342 read: IOPS=25, BW=25.9MiB/s (27.1MB/s)(371MiB/14337msec) 
00:26:55.342 slat (usec): min=52, max=4321.4k, avg=33114.82, stdev=321352.46 00:26:55.342 clat (msec): min=623, max=14334, avg=4294.90, stdev=5015.39 00:26:55.342 lat (msec): min=625, max=14336, avg=4328.02, stdev=5032.19 00:26:55.342 clat percentiles (msec): 00:26:55.342 | 1.00th=[ 625], 5.00th=[ 625], 10.00th=[ 634], 20.00th=[ 659], 00:26:55.342 | 30.00th=[ 693], 40.00th=[ 701], 50.00th=[ 701], 60.00th=[ 743], 00:26:55.342 | 70.00th=[10805], 80.00th=[10939], 90.00th=[11208], 95.00th=[11342], 00:26:55.342 | 99.00th=[14295], 99.50th=[14295], 99.90th=[14295], 99.95th=[14295], 00:26:55.342 | 99.99th=[14295] 00:26:55.342 bw ( KiB/s): min= 2048, max=190845, per=4.52%, avg=83348.83, stdev=86641.94, samples=6 00:26:55.342 iops : min= 2, max= 186, avg=81.33, stdev=84.52, samples=6 00:26:55.342 lat (msec) : 750=60.38%, 1000=4.85%, >=2000=34.77% 00:26:55.342 cpu : usr=0.01%, sys=0.62%, ctx=298, majf=0, minf=32769 00:26:55.342 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.2%, 16=4.3%, 32=8.6%, >=64=83.0% 00:26:55.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.342 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:26:55.342 issued rwts: total=371,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.342 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.342 job0: (groupid=0, jobs=1): err= 0: pid=182028: Wed Jul 24 14:25:22 2024 00:26:55.342 read: IOPS=4, BW=4130KiB/s (4229kB/s)(50.0MiB/12396msec) 00:26:55.342 slat (usec): min=429, max=7848.1k, avg=204931.54, stdev=1143560.19 00:26:55.342 clat (msec): min=2148, max=12395, avg=11809.57, stdev=2116.62 00:26:55.342 lat (msec): min=4284, max=12395, avg=12014.50, stdev=1593.60 00:26:55.342 clat percentiles (msec): 00:26:55.342 | 1.00th=[ 2165], 5.00th=[ 4329], 10.00th=[12147], 20.00th=[12281], 00:26:55.342 | 30.00th=[12281], 40.00th=[12281], 50.00th=[12281], 60.00th=[12416], 00:26:55.342 | 70.00th=[12416], 80.00th=[12416], 90.00th=[12416], 95.00th=[12416], 00:26:55.342 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:26:55.342 | 99.99th=[12416] 00:26:55.342 lat (msec) : >=2000=100.00% 00:26:55.342 cpu : usr=0.00%, sys=0.43%, ctx=71, majf=0, minf=12801 00:26:55.342 IO depths : 1=2.0%, 2=4.0%, 4=8.0%, 8=16.0%, 16=32.0%, 32=38.0%, >=64=0.0% 00:26:55.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.342 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:55.342 issued rwts: total=50,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.342 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.342 job0: (groupid=0, jobs=1): err= 0: pid=182029: Wed Jul 24 14:25:22 2024 00:26:55.342 read: IOPS=42, BW=42.9MiB/s (45.0MB/s)(530MiB/12357msec) 00:26:55.342 slat (usec): min=44, max=2137.0k, avg=19216.05, stdev=158389.29 00:26:55.342 clat (msec): min=634, max=9213, avg=2871.24, stdev=3352.96 00:26:55.342 lat (msec): min=635, max=9217, avg=2890.46, stdev=3360.93 00:26:55.342 clat percentiles (msec): 00:26:55.342 | 1.00th=[ 667], 5.00th=[ 709], 10.00th=[ 718], 20.00th=[ 735], 00:26:55.342 | 30.00th=[ 802], 40.00th=[ 894], 50.00th=[ 1070], 60.00th=[ 1318], 00:26:55.342 | 70.00th=[ 1452], 80.00th=[ 8658], 90.00th=[ 8926], 95.00th=[ 9060], 00:26:55.342 | 99.00th=[ 9194], 99.50th=[ 9194], 99.90th=[ 9194], 99.95th=[ 9194], 00:26:55.342 | 99.99th=[ 9194] 00:26:55.342 bw ( KiB/s): min= 1408, max=186368, per=4.47%, avg=82469.90, stdev=69366.94, samples=10 00:26:55.342 iops : min= 1, max= 182, avg=80.40, stdev=67.92, 
samples=10 00:26:55.342 lat (msec) : 750=25.28%, 1000=20.94%, 2000=29.06%, >=2000=24.72% 00:26:55.342 cpu : usr=0.01%, sys=0.93%, ctx=636, majf=0, minf=32769 00:26:55.342 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.0%, 32=6.0%, >=64=88.1% 00:26:55.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.342 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:26:55.342 issued rwts: total=530,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.342 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.342 job0: (groupid=0, jobs=1): err= 0: pid=182030: Wed Jul 24 14:25:22 2024 00:26:55.342 read: IOPS=14, BW=14.7MiB/s (15.4MB/s)(181MiB/12333msec) 00:26:55.342 slat (usec): min=77, max=4211.5k, avg=56076.08, stdev=380833.80 00:26:55.342 clat (msec): min=946, max=11677, avg=8149.14, stdev=4494.11 00:26:55.342 lat (msec): min=957, max=11678, avg=8205.21, stdev=4474.05 00:26:55.343 clat percentiles (msec): 00:26:55.343 | 1.00th=[ 953], 5.00th=[ 978], 10.00th=[ 1028], 20.00th=[ 1200], 00:26:55.343 | 30.00th=[ 7349], 40.00th=[10805], 50.00th=[10939], 60.00th=[11073], 00:26:55.343 | 70.00th=[11208], 80.00th=[11342], 90.00th=[11476], 95.00th=[11610], 00:26:55.343 | 99.00th=[11610], 99.50th=[11745], 99.90th=[11745], 99.95th=[11745], 00:26:55.343 | 99.99th=[11745] 00:26:55.343 bw ( KiB/s): min= 1418, max=77824, per=0.99%, avg=18327.00, stdev=30029.58, samples=6 00:26:55.343 iops : min= 1, max= 76, avg=17.83, stdev=29.37, samples=6 00:26:55.343 lat (msec) : 1000=7.18%, 2000=19.34%, >=2000=73.48% 00:26:55.343 cpu : usr=0.02%, sys=0.66%, ctx=296, majf=0, minf=32769 00:26:55.343 IO depths : 1=0.6%, 2=1.1%, 4=2.2%, 8=4.4%, 16=8.8%, 32=17.7%, >=64=65.2% 00:26:55.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.343 complete : 0=0.0%, 4=98.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.8% 00:26:55.343 issued rwts: total=181,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.343 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.343 job0: (groupid=0, jobs=1): err= 0: pid=182031: Wed Jul 24 14:25:22 2024 00:26:55.343 read: IOPS=1, BW=1249KiB/s (1279kB/s)(15.0MiB/12296msec) 00:26:55.343 slat (usec): min=596, max=8523.3k, avg=816816.25, stdev=2224580.75 00:26:55.343 clat (msec): min=43, max=12294, avg=11099.95, stdev=3219.66 00:26:55.343 lat (msec): min=8566, max=12295, avg=11916.77, stdev=1010.47 00:26:55.343 clat percentiles (msec): 00:26:55.343 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 8557], 20.00th=[10671], 00:26:55.343 | 30.00th=[12147], 40.00th=[12281], 50.00th=[12281], 60.00th=[12281], 00:26:55.343 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:26:55.343 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:26:55.343 | 99.99th=[12281] 00:26:55.343 lat (msec) : 50=6.67%, >=2000=93.33% 00:26:55.343 cpu : usr=0.00%, sys=0.08%, ctx=25, majf=0, minf=3841 00:26:55.343 IO depths : 1=6.7%, 2=13.3%, 4=26.7%, 8=53.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:55.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.343 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.343 issued rwts: total=15,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.343 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.343 job0: (groupid=0, jobs=1): err= 0: pid=182032: Wed Jul 24 14:25:22 2024 00:26:55.343 read: IOPS=0, BW=750KiB/s (768kB/s)(9216KiB/12292msec) 00:26:55.343 slat (usec): min=903, max=10670k, avg=1360406.60, 
stdev=3518897.43 00:26:55.343 clat (msec): min=47, max=12290, avg=10569.69, stdev=4000.12 00:26:55.343 lat (msec): min=10717, max=12291, avg=11930.10, stdev=670.75 00:26:55.343 clat percentiles (msec): 00:26:55.343 | 1.00th=[ 48], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[10671], 00:26:55.343 | 30.00th=[10805], 40.00th=[12147], 50.00th=[12281], 60.00th=[12281], 00:26:55.343 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:26:55.343 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:26:55.343 | 99.99th=[12281] 00:26:55.343 lat (msec) : 50=11.11%, >=2000=88.89% 00:26:55.343 cpu : usr=0.00%, sys=0.06%, ctx=20, majf=0, minf=2305 00:26:55.343 IO depths : 1=11.1%, 2=22.2%, 4=44.4%, 8=22.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:55.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.343 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.343 issued rwts: total=9,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.343 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.343 job0: (groupid=0, jobs=1): err= 0: pid=182033: Wed Jul 24 14:25:22 2024 00:26:55.343 read: IOPS=1, BW=1330KiB/s (1362kB/s)(16.0MiB/12321msec) 00:26:55.343 slat (usec): min=619, max=8521.9k, avg=767056.56, stdev=2164524.05 00:26:55.343 clat (msec): min=47, max=12318, avg=10876.57, stdev=3167.79 00:26:55.343 lat (msec): min=8569, max=12320, avg=11643.63, stdev=1314.75 00:26:55.343 clat percentiles (msec): 00:26:55.343 | 1.00th=[ 48], 5.00th=[ 48], 10.00th=[ 8557], 20.00th=[10671], 00:26:55.343 | 30.00th=[10671], 40.00th=[12281], 50.00th=[12281], 60.00th=[12281], 00:26:55.343 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:26:55.343 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:26:55.343 | 99.99th=[12281] 00:26:55.343 lat (msec) : 50=6.25%, >=2000=93.75% 00:26:55.343 cpu : usr=0.00%, sys=0.15%, ctx=28, majf=0, minf=4097 00:26:55.343 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:55.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.343 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.343 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.343 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.343 job0: (groupid=0, jobs=1): err= 0: pid=182034: Wed Jul 24 14:25:22 2024 00:26:55.343 read: IOPS=2, BW=2079KiB/s (2128kB/s)(25.0MiB/12316msec) 00:26:55.343 slat (usec): min=449, max=5700.2k, avg=406814.13, stdev=1394789.17 00:26:55.343 clat (msec): min=2144, max=12313, avg=11637.64, stdev=2298.13 00:26:55.343 lat (msec): min=6424, max=12315, avg=12044.45, stdev=1171.87 00:26:55.343 clat percentiles (msec): 00:26:55.343 | 1.00th=[ 2140], 5.00th=[ 6409], 10.00th=[12147], 20.00th=[12281], 00:26:55.343 | 30.00th=[12281], 40.00th=[12281], 50.00th=[12281], 60.00th=[12281], 00:26:55.343 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:26:55.343 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:26:55.343 | 99.99th=[12281] 00:26:55.343 lat (msec) : >=2000=100.00% 00:26:55.343 cpu : usr=0.01%, sys=0.16%, ctx=38, majf=0, minf=6401 00:26:55.343 IO depths : 1=4.0%, 2=8.0%, 4=16.0%, 8=32.0%, 16=40.0%, 32=0.0%, >=64=0.0% 00:26:55.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.343 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:26:55.343 issued rwts: 
total=25,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.343 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.343 job1: (groupid=0, jobs=1): err= 0: pid=182043: Wed Jul 24 14:25:22 2024 00:26:55.343 read: IOPS=4, BW=4755KiB/s (4870kB/s)(57.0MiB/12274msec) 00:26:55.343 slat (usec): min=427, max=2150.2k, avg=177069.40, stdev=554711.55 00:26:55.343 clat (msec): min=2180, max=12271, avg=8816.80, stdev=3206.57 00:26:55.343 lat (msec): min=4258, max=12273, avg=8993.87, stdev=3110.79 00:26:55.343 clat percentiles (msec): 00:26:55.343 | 1.00th=[ 2165], 5.00th=[ 4279], 10.00th=[ 4329], 20.00th=[ 6409], 00:26:55.343 | 30.00th=[ 6409], 40.00th=[ 6409], 50.00th=[ 8658], 60.00th=[10805], 00:26:55.343 | 70.00th=[12147], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:26:55.343 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:26:55.343 | 99.99th=[12281] 00:26:55.343 lat (msec) : >=2000=100.00% 00:26:55.343 cpu : usr=0.00%, sys=0.24%, ctx=58, majf=0, minf=14593 00:26:55.343 IO depths : 1=1.8%, 2=3.5%, 4=7.0%, 8=14.0%, 16=28.1%, 32=45.6%, >=64=0.0% 00:26:55.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.343 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:55.343 issued rwts: total=57,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.343 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.343 job1: (groupid=0, jobs=1): err= 0: pid=182044: Wed Jul 24 14:25:22 2024 00:26:55.343 read: IOPS=2, BW=2677KiB/s (2741kB/s)(32.0MiB/12240msec) 00:26:55.343 slat (usec): min=777, max=4183.1k, avg=314791.23, stdev=897269.24 00:26:55.343 clat (msec): min=2165, max=12238, avg=9096.78, stdev=3667.94 00:26:55.343 lat (msec): min=4215, max=12239, avg=9411.57, stdev=3481.42 00:26:55.343 clat percentiles (msec): 00:26:55.343 | 1.00th=[ 2165], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 4279], 00:26:55.343 | 30.00th=[ 4329], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[12013], 00:26:55.343 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:26:55.343 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:26:55.343 | 99.99th=[12281] 00:26:55.343 lat (msec) : >=2000=100.00% 00:26:55.343 cpu : usr=0.00%, sys=0.17%, ctx=45, majf=0, minf=8193 00:26:55.343 IO depths : 1=3.1%, 2=6.2%, 4=12.5%, 8=25.0%, 16=50.0%, 32=3.1%, >=64=0.0% 00:26:55.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.343 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:26:55.343 issued rwts: total=32,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.343 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.343 job1: (groupid=0, jobs=1): err= 0: pid=182045: Wed Jul 24 14:25:22 2024 00:26:55.343 read: IOPS=3, BW=3967KiB/s (4062kB/s)(48.0MiB/12390msec) 00:26:55.343 slat (usec): min=494, max=6339.1k, avg=212635.97, stdev=976558.70 00:26:55.343 clat (msec): min=2182, max=12388, avg=11755.65, stdev=2149.36 00:26:55.343 lat (msec): min=4330, max=12389, avg=11968.28, stdev=1622.47 00:26:55.343 clat percentiles (msec): 00:26:55.343 | 1.00th=[ 2198], 5.00th=[ 4396], 10.00th=[12147], 20.00th=[12281], 00:26:55.343 | 30.00th=[12281], 40.00th=[12281], 50.00th=[12281], 60.00th=[12416], 00:26:55.343 | 70.00th=[12416], 80.00th=[12416], 90.00th=[12416], 95.00th=[12416], 00:26:55.343 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:26:55.343 | 99.99th=[12416] 00:26:55.343 lat (msec) : >=2000=100.00% 00:26:55.343 cpu : 
usr=0.00%, sys=0.39%, ctx=76, majf=0, minf=12289 00:26:55.343 IO depths : 1=2.1%, 2=4.2%, 4=8.3%, 8=16.7%, 16=33.3%, 32=35.4%, >=64=0.0% 00:26:55.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.343 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:55.343 issued rwts: total=48,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.343 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.343 job1: (groupid=0, jobs=1): err= 0: pid=182046: Wed Jul 24 14:25:22 2024 00:26:55.343 read: IOPS=1, BW=1667KiB/s (1707kB/s)(20.0MiB/12287msec) 00:26:55.343 slat (usec): min=579, max=2143.4k, avg=505119.72, stdev=888447.49 00:26:55.343 clat (msec): min=2183, max=12284, avg=8377.20, stdev=3694.47 00:26:55.343 lat (msec): min=4267, max=12286, avg=8882.32, stdev=3487.95 00:26:55.343 clat percentiles (msec): 00:26:55.343 | 1.00th=[ 2198], 5.00th=[ 2198], 10.00th=[ 4279], 20.00th=[ 4279], 00:26:55.343 | 30.00th=[ 4329], 40.00th=[ 6409], 50.00th=[ 8557], 60.00th=[10671], 00:26:55.343 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:26:55.343 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:26:55.343 | 99.99th=[12281] 00:26:55.343 lat (msec) : >=2000=100.00% 00:26:55.344 cpu : usr=0.00%, sys=0.10%, ctx=33, majf=0, minf=5121 00:26:55.344 IO depths : 1=5.0%, 2=10.0%, 4=20.0%, 8=40.0%, 16=25.0%, 32=0.0%, >=64=0.0% 00:26:55.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.344 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:26:55.344 issued rwts: total=20,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.344 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.344 job1: (groupid=0, jobs=1): err= 0: pid=182047: Wed Jul 24 14:25:22 2024 00:26:55.344 read: IOPS=0, BW=417KiB/s (427kB/s)(5120KiB/12273msec) 00:26:55.344 slat (msec): min=7, max=5843, avg=2017.52, stdev=2384.15 00:26:55.344 clat (msec): min=2185, max=12264, avg=7483.96, stdev=4609.84 00:26:55.344 lat (msec): min=4299, max=12272, avg=9501.48, stdev=3857.02 00:26:55.344 clat percentiles (msec): 00:26:55.344 | 1.00th=[ 2198], 5.00th=[ 2198], 10.00th=[ 2198], 20.00th=[ 2198], 00:26:55.344 | 30.00th=[ 4329], 40.00th=[ 4329], 50.00th=[ 6409], 60.00th=[ 6409], 00:26:55.344 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:26:55.344 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:26:55.344 | 99.99th=[12281] 00:26:55.344 lat (msec) : >=2000=100.00% 00:26:55.344 cpu : usr=0.00%, sys=0.02%, ctx=22, majf=0, minf=1281 00:26:55.344 IO depths : 1=20.0%, 2=40.0%, 4=40.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:55.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.344 complete : 0=0.0%, 4=0.0%, 8=100.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.344 issued rwts: total=5,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.344 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.344 job1: (groupid=0, jobs=1): err= 0: pid=182048: Wed Jul 24 14:25:22 2024 00:26:55.344 read: IOPS=24, BW=24.2MiB/s (25.3MB/s)(298MiB/12329msec) 00:26:55.344 slat (usec): min=64, max=2072.6k, avg=34062.69, stdev=236543.82 00:26:55.344 clat (msec): min=651, max=12307, avg=5086.10, stdev=4758.43 00:26:55.344 lat (msec): min=656, max=12309, avg=5120.16, stdev=4766.77 00:26:55.344 clat percentiles (msec): 00:26:55.344 | 1.00th=[ 659], 5.00th=[ 667], 10.00th=[ 676], 20.00th=[ 701], 00:26:55.344 | 30.00th=[ 709], 
40.00th=[ 735], 50.00th=[ 2735], 60.00th=[ 6946], 00:26:55.344 | 70.00th=[10671], 80.00th=[10939], 90.00th=[11073], 95.00th=[11208], 00:26:55.344 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:26:55.344 | 99.99th=[12281] 00:26:55.344 bw ( KiB/s): min= 1400, max=186368, per=2.37%, avg=43691.38, stdev=62057.45, samples=8 00:26:55.344 iops : min= 1, max= 182, avg=42.50, stdev=60.71, samples=8 00:26:55.344 lat (msec) : 750=41.28%, 1000=8.05%, >=2000=50.67% 00:26:55.344 cpu : usr=0.00%, sys=0.77%, ctx=250, majf=0, minf=32769 00:26:55.344 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.7%, 16=5.4%, 32=10.7%, >=64=78.9% 00:26:55.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.344 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:26:55.344 issued rwts: total=298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.344 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.344 job1: (groupid=0, jobs=1): err= 0: pid=182050: Wed Jul 24 14:25:22 2024 00:26:55.344 read: IOPS=47, BW=47.8MiB/s (50.1MB/s)(586MiB/12260msec) 00:26:55.344 slat (usec): min=46, max=2616.0k, avg=17075.46, stdev=153204.09 00:26:55.344 clat (msec): min=703, max=10700, avg=2235.04, stdev=1955.38 00:26:55.344 lat (msec): min=706, max=12259, avg=2252.11, stdev=1985.50 00:26:55.344 clat percentiles (msec): 00:26:55.344 | 1.00th=[ 709], 5.00th=[ 709], 10.00th=[ 709], 20.00th=[ 726], 00:26:55.344 | 30.00th=[ 785], 40.00th=[ 802], 50.00th=[ 902], 60.00th=[ 2400], 00:26:55.344 | 70.00th=[ 2735], 80.00th=[ 5403], 90.00th=[ 5537], 95.00th=[ 5604], 00:26:55.344 | 99.00th=[ 5671], 99.50th=[ 6409], 99.90th=[10671], 99.95th=[10671], 00:26:55.344 | 99.99th=[10671] 00:26:55.344 bw ( KiB/s): min= 1544, max=190464, per=5.09%, avg=93889.10, stdev=67756.71, samples=10 00:26:55.344 iops : min= 1, max= 186, avg=91.60, stdev=66.21, samples=10 00:26:55.344 lat (msec) : 750=24.40%, 1000=32.59%, >=2000=43.00% 00:26:55.344 cpu : usr=0.03%, sys=0.89%, ctx=487, majf=0, minf=32769 00:26:55.344 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.5%, >=64=89.2% 00:26:55.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.344 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:26:55.344 issued rwts: total=586,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.344 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.344 job1: (groupid=0, jobs=1): err= 0: pid=182051: Wed Jul 24 14:25:22 2024 00:26:55.344 read: IOPS=2, BW=2076KiB/s (2125kB/s)(25.0MiB/12334msec) 00:26:55.344 slat (usec): min=495, max=4227.7k, avg=405625.37, stdev=1025168.27 00:26:55.344 clat (msec): min=2192, max=12329, avg=10993.08, stdev=2930.32 00:26:55.344 lat (msec): min=4290, max=12333, avg=11398.70, stdev=2294.20 00:26:55.344 clat percentiles (msec): 00:26:55.344 | 1.00th=[ 2198], 5.00th=[ 4279], 10.00th=[ 4329], 20.00th=[10671], 00:26:55.344 | 30.00th=[12281], 40.00th=[12281], 50.00th=[12281], 60.00th=[12281], 00:26:55.344 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:26:55.344 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:26:55.344 | 99.99th=[12281] 00:26:55.344 lat (msec) : >=2000=100.00% 00:26:55.344 cpu : usr=0.00%, sys=0.19%, ctx=45, majf=0, minf=6401 00:26:55.344 IO depths : 1=4.0%, 2=8.0%, 4=16.0%, 8=32.0%, 16=40.0%, 32=0.0%, >=64=0.0% 00:26:55.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.344 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=100.0%, 64=0.0%, >=64=0.0% 00:26:55.344 issued rwts: total=25,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.344 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.344 job1: (groupid=0, jobs=1): err= 0: pid=182052: Wed Jul 24 14:25:22 2024 00:26:55.344 read: IOPS=0, BW=1002KiB/s (1026kB/s)(12.0MiB/12269msec) 00:26:55.344 slat (usec): min=820, max=4192.5k, avg=841953.04, stdev=1353394.69 00:26:55.344 clat (msec): min=2165, max=12266, avg=7660.28, stdev=4042.00 00:26:55.344 lat (msec): min=4271, max=12268, avg=8502.23, stdev=3840.59 00:26:55.344 clat percentiles (msec): 00:26:55.344 | 1.00th=[ 2165], 5.00th=[ 2165], 10.00th=[ 4279], 20.00th=[ 4329], 00:26:55.344 | 30.00th=[ 4329], 40.00th=[ 4329], 50.00th=[ 4396], 60.00th=[10671], 00:26:55.344 | 70.00th=[12147], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:26:55.344 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:26:55.344 | 99.99th=[12281] 00:26:55.344 lat (msec) : >=2000=100.00% 00:26:55.344 cpu : usr=0.00%, sys=0.06%, ctx=35, majf=0, minf=3073 00:26:55.344 IO depths : 1=8.3%, 2=16.7%, 4=33.3%, 8=41.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:55.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.344 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.344 issued rwts: total=12,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.344 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.344 job1: (groupid=0, jobs=1): err= 0: pid=182053: Wed Jul 24 14:25:22 2024 00:26:55.344 read: IOPS=73, BW=73.5MiB/s (77.1MB/s)(904MiB/12297msec) 00:26:55.344 slat (usec): min=54, max=4227.7k, avg=11213.79, stdev=156707.42 00:26:55.344 clat (msec): min=261, max=12281, avg=1671.17, stdev=2978.14 00:26:55.344 lat (msec): min=264, max=12282, avg=1682.38, stdev=2987.13 00:26:55.344 clat percentiles (msec): 00:26:55.344 | 1.00th=[ 264], 5.00th=[ 275], 10.00th=[ 275], 20.00th=[ 305], 00:26:55.344 | 30.00th=[ 409], 40.00th=[ 506], 50.00th=[ 527], 60.00th=[ 558], 00:26:55.344 | 70.00th=[ 575], 80.00th=[ 609], 90.00th=[ 8792], 95.00th=[ 8926], 00:26:55.344 | 99.00th=[ 8926], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:26:55.344 | 99.99th=[12281] 00:26:55.344 bw ( KiB/s): min= 1436, max=440320, per=9.59%, avg=176742.67, stdev=142845.39, samples=9 00:26:55.344 iops : min= 1, max= 430, avg=172.56, stdev=139.56, samples=9 00:26:55.344 lat (msec) : 500=38.61%, 750=46.68%, >=2000=14.71% 00:26:55.344 cpu : usr=0.06%, sys=0.97%, ctx=809, majf=0, minf=32769 00:26:55.344 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.0% 00:26:55.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.344 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:55.344 issued rwts: total=904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.344 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.344 job1: (groupid=0, jobs=1): err= 0: pid=182054: Wed Jul 24 14:25:22 2024 00:26:55.344 read: IOPS=3, BW=3314KiB/s (3394kB/s)(40.0MiB/12358msec) 00:26:55.344 slat (usec): min=544, max=4206.1k, avg=254595.53, stdev=822859.82 00:26:55.344 clat (msec): min=2173, max=12355, avg=11126.06, stdev=2870.64 00:26:55.344 lat (msec): min=4242, max=12357, avg=11380.66, stdev=2481.55 00:26:55.344 clat percentiles (msec): 00:26:55.344 | 1.00th=[ 2165], 5.00th=[ 4245], 10.00th=[ 4279], 20.00th=[12281], 00:26:55.344 | 30.00th=[12281], 40.00th=[12281], 50.00th=[12281], 60.00th=[12281], 00:26:55.344 | 
70.00th=[12281], 80.00th=[12281], 90.00th=[12416], 95.00th=[12416], 00:26:55.344 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:26:55.344 | 99.99th=[12416] 00:26:55.344 lat (msec) : >=2000=100.00% 00:26:55.344 cpu : usr=0.00%, sys=0.33%, ctx=59, majf=0, minf=10241 00:26:55.344 IO depths : 1=2.5%, 2=5.0%, 4=10.0%, 8=20.0%, 16=40.0%, 32=22.5%, >=64=0.0% 00:26:55.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.344 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:55.344 issued rwts: total=40,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.344 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.344 job1: (groupid=0, jobs=1): err= 0: pid=182055: Wed Jul 24 14:25:22 2024 00:26:55.344 read: IOPS=1, BW=1167KiB/s (1195kB/s)(14.0MiB/12284msec) 00:26:55.344 slat (msec): min=6, max=4334, avg=721.90, stdev=1305.95 00:26:55.345 clat (msec): min=2176, max=12265, avg=9650.24, stdev=3647.15 00:26:55.345 lat (msec): min=4298, max=12283, avg=10372.14, stdev=2996.28 00:26:55.345 clat percentiles (msec): 00:26:55.345 | 1.00th=[ 2165], 5.00th=[ 2165], 10.00th=[ 4329], 20.00th=[ 4329], 00:26:55.345 | 30.00th=[10805], 40.00th=[10805], 50.00th=[10805], 60.00th=[12147], 00:26:55.345 | 70.00th=[12147], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:26:55.345 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:26:55.345 | 99.99th=[12281] 00:26:55.345 lat (msec) : >=2000=100.00% 00:26:55.345 cpu : usr=0.01%, sys=0.07%, ctx=39, majf=0, minf=3585 00:26:55.345 IO depths : 1=7.1%, 2=14.3%, 4=28.6%, 8=50.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:55.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.345 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.345 issued rwts: total=14,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.345 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.345 job1: (groupid=0, jobs=1): err= 0: pid=182056: Wed Jul 24 14:25:22 2024 00:26:55.345 read: IOPS=1, BW=1084KiB/s (1110kB/s)(13.0MiB/12278msec) 00:26:55.345 slat (usec): min=822, max=4192.5k, avg=776612.07, stdev=1501526.53 00:26:55.345 clat (msec): min=2181, max=12276, avg=8134.06, stdev=4166.74 00:26:55.345 lat (msec): min=4290, max=12277, avg=8910.67, stdev=3896.96 00:26:55.345 clat percentiles (msec): 00:26:55.345 | 1.00th=[ 2198], 5.00th=[ 2198], 10.00th=[ 4279], 20.00th=[ 4329], 00:26:55.345 | 30.00th=[ 4329], 40.00th=[ 4396], 50.00th=[ 8557], 60.00th=[12147], 00:26:55.345 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:26:55.345 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:26:55.345 | 99.99th=[12281] 00:26:55.345 lat (msec) : >=2000=100.00% 00:26:55.345 cpu : usr=0.00%, sys=0.06%, ctx=28, majf=0, minf=3329 00:26:55.345 IO depths : 1=7.7%, 2=15.4%, 4=30.8%, 8=46.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:55.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.345 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.345 issued rwts: total=13,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.345 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.345 job2: (groupid=0, jobs=1): err= 0: pid=182068: Wed Jul 24 14:25:22 2024 00:26:55.345 read: IOPS=30, BW=30.1MiB/s (31.5MB/s)(371MiB/12342msec) 00:26:55.345 slat (usec): min=57, max=2083.6k, avg=27456.81, stdev=212304.86 00:26:55.345 clat (msec): min=519, max=12212, 
avg=4137.61, stdev=4560.08 00:26:55.345 lat (msec): min=521, max=12277, avg=4165.07, stdev=4572.12 00:26:55.345 clat percentiles (msec): 00:26:55.345 | 1.00th=[ 523], 5.00th=[ 527], 10.00th=[ 527], 20.00th=[ 575], 00:26:55.345 | 30.00th=[ 575], 40.00th=[ 592], 50.00th=[ 659], 60.00th=[ 2702], 00:26:55.345 | 70.00th=[ 6946], 80.00th=[10805], 90.00th=[11073], 95.00th=[11073], 00:26:55.345 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:26:55.345 | 99.99th=[12147] 00:26:55.345 bw ( KiB/s): min= 1950, max=215040, per=3.39%, avg=62451.75, stdev=81732.63, samples=8 00:26:55.345 iops : min= 1, max= 210, avg=60.88, stdev=79.91, samples=8 00:26:55.345 lat (msec) : 750=57.14%, >=2000=42.86% 00:26:55.345 cpu : usr=0.01%, sys=0.86%, ctx=322, majf=0, minf=32393 00:26:55.345 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.2%, 16=4.3%, 32=8.6%, >=64=83.0% 00:26:55.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.345 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:26:55.345 issued rwts: total=371,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.345 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.345 job2: (groupid=0, jobs=1): err= 0: pid=182069: Wed Jul 24 14:25:22 2024 00:26:55.345 read: IOPS=3, BW=3255KiB/s (3333kB/s)(39.0MiB/12271msec) 00:26:55.345 slat (usec): min=446, max=2106.9k, avg=259486.99, stdev=651155.15 00:26:55.345 clat (msec): min=2150, max=12270, avg=9806.81, stdev=2952.75 00:26:55.345 lat (msec): min=4257, max=12270, avg=10066.30, stdev=2695.72 00:26:55.345 clat percentiles (msec): 00:26:55.345 | 1.00th=[ 2165], 5.00th=[ 4245], 10.00th=[ 4329], 20.00th=[ 6477], 00:26:55.345 | 30.00th=[ 8658], 40.00th=[10671], 50.00th=[10805], 60.00th=[12147], 00:26:55.345 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:26:55.345 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:26:55.345 | 99.99th=[12281] 00:26:55.345 lat (msec) : >=2000=100.00% 00:26:55.345 cpu : usr=0.00%, sys=0.20%, ctx=54, majf=0, minf=9985 00:26:55.345 IO depths : 1=2.6%, 2=5.1%, 4=10.3%, 8=20.5%, 16=41.0%, 32=20.5%, >=64=0.0% 00:26:55.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.345 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:55.345 issued rwts: total=39,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.345 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.345 job2: (groupid=0, jobs=1): err= 0: pid=182070: Wed Jul 24 14:25:22 2024 00:26:55.345 read: IOPS=49, BW=49.2MiB/s (51.6MB/s)(495MiB/10067msec) 00:26:55.345 slat (usec): min=52, max=2126.8k, avg=20203.40, stdev=147697.21 00:26:55.345 clat (msec): min=62, max=7231, avg=979.67, stdev=761.43 00:26:55.345 lat (msec): min=72, max=7242, avg=999.88, stdev=812.12 00:26:55.345 clat percentiles (msec): 00:26:55.345 | 1.00th=[ 85], 5.00th=[ 228], 10.00th=[ 368], 20.00th=[ 667], 00:26:55.345 | 30.00th=[ 810], 40.00th=[ 844], 50.00th=[ 919], 60.00th=[ 1045], 00:26:55.345 | 70.00th=[ 1116], 80.00th=[ 1167], 90.00th=[ 1234], 95.00th=[ 1452], 00:26:55.345 | 99.00th=[ 7148], 99.50th=[ 7215], 99.90th=[ 7215], 99.95th=[ 7215], 00:26:55.345 | 99.99th=[ 7215] 00:26:55.345 bw ( KiB/s): min=63488, max=161792, per=6.81%, avg=125564.33, stdev=36538.74, samples=6 00:26:55.345 iops : min= 62, max= 158, avg=122.50, stdev=35.63, samples=6 00:26:55.345 lat (msec) : 100=1.41%, 250=4.44%, 500=8.48%, 750=7.68%, 1000=35.35% 00:26:55.345 lat (msec) : 2000=41.01%, >=2000=1.62% 
00:26:55.345 cpu : usr=0.02%, sys=1.24%, ctx=694, majf=0, minf=32769 00:26:55.345 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.5%, >=64=87.3% 00:26:55.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.345 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:26:55.345 issued rwts: total=495,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.345 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.345 job2: (groupid=0, jobs=1): err= 0: pid=182071: Wed Jul 24 14:25:22 2024 00:26:55.345 read: IOPS=3, BW=3674KiB/s (3762kB/s)(44.0MiB/12265msec) 00:26:55.345 slat (usec): min=664, max=2076.7k, avg=230016.63, stdev=619779.08 00:26:55.345 clat (msec): min=2143, max=12260, avg=8680.09, stdev=3444.20 00:26:55.345 lat (msec): min=4184, max=12264, avg=8910.11, stdev=3333.69 00:26:55.345 clat percentiles (msec): 00:26:55.345 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 4279], 00:26:55.345 | 30.00th=[ 6342], 40.00th=[ 6409], 50.00th=[ 8557], 60.00th=[12013], 00:26:55.345 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:26:55.345 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:26:55.345 | 99.99th=[12281] 00:26:55.345 lat (msec) : >=2000=100.00% 00:26:55.345 cpu : usr=0.00%, sys=0.31%, ctx=71, majf=0, minf=11265 00:26:55.345 IO depths : 1=2.3%, 2=4.5%, 4=9.1%, 8=18.2%, 16=36.4%, 32=29.5%, >=64=0.0% 00:26:55.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.345 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:55.345 issued rwts: total=44,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.345 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.345 job2: (groupid=0, jobs=1): err= 0: pid=182072: Wed Jul 24 14:25:22 2024 00:26:55.345 read: IOPS=42, BW=42.2MiB/s (44.3MB/s)(520MiB/12321msec) 00:26:55.345 slat (usec): min=46, max=2072.7k, avg=19511.27, stdev=153886.02 00:26:55.345 clat (msec): min=500, max=8602, avg=1732.22, stdev=1843.70 00:26:55.345 lat (msec): min=502, max=8627, avg=1751.73, stdev=1870.51 00:26:55.345 clat percentiles (msec): 00:26:55.345 | 1.00th=[ 502], 5.00th=[ 510], 10.00th=[ 542], 20.00th=[ 550], 00:26:55.345 | 30.00th=[ 558], 40.00th=[ 609], 50.00th=[ 709], 60.00th=[ 793], 00:26:55.345 | 70.00th=[ 902], 80.00th=[ 3809], 90.00th=[ 4279], 95.00th=[ 4396], 00:26:55.345 | 99.00th=[ 6678], 99.50th=[ 8557], 99.90th=[ 8658], 99.95th=[ 8658], 00:26:55.345 | 99.99th=[ 8658] 00:26:55.345 bw ( KiB/s): min= 1408, max=235520, per=7.27%, avg=134037.33, stdev=88618.99, samples=6 00:26:55.345 iops : min= 1, max= 230, avg=130.83, stdev=86.65, samples=6 00:26:55.345 lat (msec) : 750=55.00%, 1000=16.35%, >=2000=28.65% 00:26:55.345 cpu : usr=0.03%, sys=0.91%, ctx=441, majf=0, minf=32769 00:26:55.345 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.1%, 32=6.2%, >=64=87.9% 00:26:55.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.345 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:26:55.345 issued rwts: total=520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.345 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.345 job2: (groupid=0, jobs=1): err= 0: pid=182073: Wed Jul 24 14:25:22 2024 00:26:55.345 read: IOPS=54, BW=54.8MiB/s (57.4MB/s)(674MiB/12306msec) 00:26:55.345 slat (usec): min=57, max=2142.1k, avg=15054.05, stdev=147381.92 00:26:55.345 clat (msec): min=255, max=10769, avg=1782.25, stdev=2533.95 
00:26:55.345 lat (msec): min=258, max=10784, avg=1797.31, stdev=2551.14 00:26:55.345 clat percentiles (msec): 00:26:55.345 | 1.00th=[ 259], 5.00th=[ 266], 10.00th=[ 271], 20.00th=[ 321], 00:26:55.345 | 30.00th=[ 384], 40.00th=[ 409], 50.00th=[ 422], 60.00th=[ 468], 00:26:55.345 | 70.00th=[ 558], 80.00th=[ 4178], 90.00th=[ 6611], 95.00th=[ 6678], 00:26:55.345 | 99.00th=[ 8658], 99.50th=[10671], 99.90th=[10805], 99.95th=[10805], 00:26:55.345 | 99.99th=[10805] 00:26:55.345 bw ( KiB/s): min= 1383, max=350208, per=8.67%, avg=159861.43, stdev=151844.25, samples=7 00:26:55.345 iops : min= 1, max= 342, avg=156.00, stdev=148.29, samples=7 00:26:55.345 lat (msec) : 500=63.06%, 750=11.57%, >=2000=25.37% 00:26:55.345 cpu : usr=0.02%, sys=0.98%, ctx=612, majf=0, minf=32769 00:26:55.345 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.7%, >=64=90.7% 00:26:55.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.346 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:26:55.346 issued rwts: total=674,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.346 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.346 job2: (groupid=0, jobs=1): err= 0: pid=182074: Wed Jul 24 14:25:22 2024 00:26:55.346 read: IOPS=3, BW=3336KiB/s (3416kB/s)(40.0MiB/12277msec) 00:26:55.346 slat (usec): min=415, max=2081.7k, avg=252832.03, stdev=658643.61 00:26:55.346 clat (msec): min=2162, max=12274, avg=8010.95, stdev=3334.50 00:26:55.346 lat (msec): min=4223, max=12276, avg=8263.78, stdev=3262.33 00:26:55.346 clat percentiles (msec): 00:26:55.346 | 1.00th=[ 2165], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 4329], 00:26:55.346 | 30.00th=[ 6409], 40.00th=[ 6477], 50.00th=[ 6477], 60.00th=[ 8557], 00:26:55.346 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:26:55.346 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:26:55.346 | 99.99th=[12281] 00:26:55.346 lat (msec) : >=2000=100.00% 00:26:55.346 cpu : usr=0.00%, sys=0.27%, ctx=54, majf=0, minf=10241 00:26:55.346 IO depths : 1=2.5%, 2=5.0%, 4=10.0%, 8=20.0%, 16=40.0%, 32=22.5%, >=64=0.0% 00:26:55.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.346 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:55.346 issued rwts: total=40,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.346 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.346 job2: (groupid=0, jobs=1): err= 0: pid=182075: Wed Jul 24 14:25:22 2024 00:26:55.346 read: IOPS=27, BW=27.3MiB/s (28.6MB/s)(337MiB/12362msec) 00:26:55.346 slat (usec): min=55, max=2140.6k, avg=30230.59, stdev=225081.81 00:26:55.346 clat (msec): min=527, max=12269, avg=4537.93, stdev=4835.69 00:26:55.346 lat (msec): min=532, max=12275, avg=4568.16, stdev=4845.19 00:26:55.346 clat percentiles (msec): 00:26:55.346 | 1.00th=[ 531], 5.00th=[ 567], 10.00th=[ 592], 20.00th=[ 651], 00:26:55.346 | 30.00th=[ 676], 40.00th=[ 684], 50.00th=[ 709], 60.00th=[ 4279], 00:26:55.346 | 70.00th=[10805], 80.00th=[10939], 90.00th=[11073], 95.00th=[11073], 00:26:55.346 | 99.00th=[11208], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:26:55.346 | 99.99th=[12281] 00:26:55.346 bw ( KiB/s): min= 1992, max=229376, per=3.33%, avg=61417.86, stdev=86057.31, samples=7 00:26:55.346 iops : min= 1, max= 224, avg=59.71, stdev=84.17, samples=7 00:26:55.346 lat (msec) : 750=57.27%, 1000=1.19%, >=2000=41.54% 00:26:55.346 cpu : usr=0.02%, sys=0.99%, ctx=313, majf=0, minf=32769 00:26:55.346 
IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.7%, 32=9.5%, >=64=81.3% 00:26:55.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.346 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:26:55.346 issued rwts: total=337,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.346 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.346 job2: (groupid=0, jobs=1): err= 0: pid=182076: Wed Jul 24 14:25:22 2024 00:26:55.346 read: IOPS=2, BW=2340KiB/s (2396kB/s)(28.0MiB/12253msec) 00:26:55.346 slat (usec): min=642, max=2110.4k, avg=360462.36, stdev=748396.87 00:26:55.346 clat (msec): min=2159, max=12251, avg=8940.96, stdev=3111.49 00:26:55.346 lat (msec): min=4270, max=12252, avg=9301.42, stdev=2872.23 00:26:55.346 clat percentiles (msec): 00:26:55.346 | 1.00th=[ 2165], 5.00th=[ 4279], 10.00th=[ 4329], 20.00th=[ 6409], 00:26:55.346 | 30.00th=[ 6477], 40.00th=[ 8658], 50.00th=[ 8658], 60.00th=[10805], 00:26:55.346 | 70.00th=[10805], 80.00th=[12147], 90.00th=[12281], 95.00th=[12281], 00:26:55.346 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:26:55.346 | 99.99th=[12281] 00:26:55.346 lat (msec) : >=2000=100.00% 00:26:55.346 cpu : usr=0.00%, sys=0.15%, ctx=50, majf=0, minf=7169 00:26:55.346 IO depths : 1=3.6%, 2=7.1%, 4=14.3%, 8=28.6%, 16=46.4%, 32=0.0%, >=64=0.0% 00:26:55.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.346 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:26:55.346 issued rwts: total=28,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.346 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.346 job2: (groupid=0, jobs=1): err= 0: pid=182077: Wed Jul 24 14:25:22 2024 00:26:55.346 read: IOPS=47, BW=47.1MiB/s (49.4MB/s)(578MiB/12265msec) 00:26:55.346 slat (usec): min=63, max=2126.8k, avg=17485.89, stdev=133814.31 00:26:55.346 clat (msec): min=636, max=6867, avg=2305.56, stdev=1750.63 00:26:55.346 lat (msec): min=644, max=6963, avg=2323.04, stdev=1759.10 00:26:55.346 clat percentiles (msec): 00:26:55.346 | 1.00th=[ 642], 5.00th=[ 659], 10.00th=[ 659], 20.00th=[ 684], 00:26:55.346 | 30.00th=[ 693], 40.00th=[ 1116], 50.00th=[ 1318], 60.00th=[ 2970], 00:26:55.346 | 70.00th=[ 3373], 80.00th=[ 4463], 90.00th=[ 5000], 95.00th=[ 5201], 00:26:55.346 | 99.00th=[ 5604], 99.50th=[ 6879], 99.90th=[ 6879], 99.95th=[ 6879], 00:26:55.346 | 99.99th=[ 6879] 00:26:55.346 bw ( KiB/s): min= 1458, max=184320, per=4.55%, avg=83914.36, stdev=59846.20, samples=11 00:26:55.346 iops : min= 1, max= 180, avg=81.91, stdev=58.50, samples=11 00:26:55.346 lat (msec) : 750=38.75%, 1000=1.21%, 2000=14.36%, >=2000=45.67% 00:26:55.346 cpu : usr=0.01%, sys=0.99%, ctx=726, majf=0, minf=32769 00:26:55.346 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.5%, >=64=89.1% 00:26:55.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.346 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:26:55.346 issued rwts: total=578,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.346 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.346 job2: (groupid=0, jobs=1): err= 0: pid=182078: Wed Jul 24 14:25:22 2024 00:26:55.346 read: IOPS=2, BW=2418KiB/s (2476kB/s)(29.0MiB/12279msec) 00:26:55.346 slat (usec): min=459, max=2084.5k, avg=348519.05, stdev=756095.66 00:26:55.346 clat (msec): min=2171, max=12275, avg=6690.56, stdev=2629.71 00:26:55.346 lat (msec): min=4242, max=12278, avg=7039.08, stdev=2678.70 
00:26:55.346 clat percentiles (msec): 00:26:55.346 | 1.00th=[ 2165], 5.00th=[ 4245], 10.00th=[ 4245], 20.00th=[ 4329], 00:26:55.346 | 30.00th=[ 4329], 40.00th=[ 6409], 50.00th=[ 6409], 60.00th=[ 6477], 00:26:55.346 | 70.00th=[ 8557], 80.00th=[ 8557], 90.00th=[10671], 95.00th=[12281], 00:26:55.346 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:26:55.346 | 99.99th=[12281] 00:26:55.346 lat (msec) : >=2000=100.00% 00:26:55.346 cpu : usr=0.00%, sys=0.13%, ctx=44, majf=0, minf=7425 00:26:55.346 IO depths : 1=3.4%, 2=6.9%, 4=13.8%, 8=27.6%, 16=48.3%, 32=0.0%, >=64=0.0% 00:26:55.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.346 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:26:55.346 issued rwts: total=29,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.346 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.346 job2: (groupid=0, jobs=1): err= 0: pid=182080: Wed Jul 24 14:25:22 2024 00:26:55.346 read: IOPS=4, BW=4267KiB/s (4369kB/s)(51.0MiB/12239msec) 00:26:55.346 slat (usec): min=490, max=2064.1k, avg=198067.86, stdev=572893.50 00:26:55.346 clat (msec): min=2137, max=12238, avg=8849.32, stdev=3158.96 00:26:55.346 lat (msec): min=4201, max=12238, avg=9047.39, stdev=3044.29 00:26:55.346 clat percentiles (msec): 00:26:55.346 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4279], 20.00th=[ 6342], 00:26:55.346 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[ 8658], 60.00th=[10671], 00:26:55.346 | 70.00th=[12013], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:26:55.346 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:26:55.346 | 99.99th=[12281] 00:26:55.346 lat (msec) : >=2000=100.00% 00:26:55.346 cpu : usr=0.00%, sys=0.20%, ctx=67, majf=0, minf=13057 00:26:55.346 IO depths : 1=2.0%, 2=3.9%, 4=7.8%, 8=15.7%, 16=31.4%, 32=39.2%, >=64=0.0% 00:26:55.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.346 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:55.346 issued rwts: total=51,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.347 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.347 job2: (groupid=0, jobs=1): err= 0: pid=182081: Wed Jul 24 14:25:22 2024 00:26:55.347 read: IOPS=3, BW=3821KiB/s (3912kB/s)(46.0MiB/12329msec) 00:26:55.347 slat (usec): min=452, max=2088.7k, avg=221168.02, stdev=613256.47 00:26:55.347 clat (msec): min=2154, max=12326, avg=9945.22, stdev=3185.36 00:26:55.347 lat (msec): min=4202, max=12327, avg=10166.39, stdev=2978.88 00:26:55.347 clat percentiles (msec): 00:26:55.347 | 1.00th=[ 2165], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6409], 00:26:55.347 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[12147], 60.00th=[12281], 00:26:55.347 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:26:55.347 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:26:55.347 | 99.99th=[12281] 00:26:55.347 lat (msec) : >=2000=100.00% 00:26:55.347 cpu : usr=0.00%, sys=0.34%, ctx=76, majf=0, minf=11777 00:26:55.347 IO depths : 1=2.2%, 2=4.3%, 4=8.7%, 8=17.4%, 16=34.8%, 32=32.6%, >=64=0.0% 00:26:55.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.347 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:55.347 issued rwts: total=46,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.347 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.347 job3: (groupid=0, jobs=1): err= 0: 
pid=182084: Wed Jul 24 14:25:22 2024 00:26:55.347 read: IOPS=152, BW=153MiB/s (160MB/s)(1864MiB/12191msec) 00:26:55.347 slat (usec): min=37, max=2074.6k, avg=5396.75, stdev=88439.36 00:26:55.347 clat (msec): min=129, max=12051, avg=575.81, stdev=1608.48 00:26:55.347 lat (msec): min=130, max=12190, avg=581.20, stdev=1625.62 00:26:55.347 clat percentiles (msec): 00:26:55.347 | 1.00th=[ 131], 5.00th=[ 131], 10.00th=[ 132], 20.00th=[ 132], 00:26:55.347 | 30.00th=[ 133], 40.00th=[ 133], 50.00th=[ 133], 60.00th=[ 134], 00:26:55.347 | 70.00th=[ 134], 80.00th=[ 134], 90.00th=[ 136], 95.00th=[ 6477], 00:26:55.347 | 99.00th=[ 6544], 99.50th=[ 6544], 99.90th=[10671], 99.95th=[12013], 00:26:55.347 | 99.99th=[12013] 00:26:55.347 bw ( KiB/s): min= 1535, max=954368, per=24.11%, avg=444607.88, stdev=441505.16, samples=8 00:26:55.347 iops : min= 1, max= 932, avg=434.12, stdev=431.23, samples=8 00:26:55.347 lat (msec) : 250=92.49%, 500=0.05%, >=2000=7.46% 00:26:55.347 cpu : usr=0.07%, sys=1.40%, ctx=1734, majf=0, minf=32769 00:26:55.347 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:26:55.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.347 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:55.347 issued rwts: total=1864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.347 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.347 job3: (groupid=0, jobs=1): err= 0: pid=182085: Wed Jul 24 14:25:22 2024 00:26:55.347 read: IOPS=29, BW=29.4MiB/s (30.8MB/s)(298MiB/10143msec) 00:26:55.347 slat (usec): min=53, max=2116.4k, avg=33749.90, stdev=204105.89 00:26:55.347 clat (msec): min=82, max=9074, avg=2242.81, stdev=2743.45 00:26:55.347 lat (msec): min=157, max=9076, avg=2276.56, stdev=2769.28 00:26:55.347 clat percentiles (msec): 00:26:55.347 | 1.00th=[ 159], 5.00th=[ 176], 10.00th=[ 271], 20.00th=[ 464], 00:26:55.347 | 30.00th=[ 659], 40.00th=[ 802], 50.00th=[ 894], 60.00th=[ 1167], 00:26:55.347 | 70.00th=[ 1586], 80.00th=[ 3239], 90.00th=[ 8792], 95.00th=[ 8926], 00:26:55.347 | 99.00th=[ 8926], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060], 00:26:55.347 | 99.99th=[ 9060] 00:26:55.347 bw ( KiB/s): min=47104, max=159744, per=6.29%, avg=116053.33, stdev=60418.89, samples=3 00:26:55.347 iops : min= 46, max= 156, avg=113.33, stdev=59.00, samples=3 00:26:55.347 lat (msec) : 100=0.34%, 250=5.37%, 500=15.44%, 750=10.74%, 1000=22.48% 00:26:55.347 lat (msec) : 2000=16.11%, >=2000=29.53% 00:26:55.347 cpu : usr=0.04%, sys=1.10%, ctx=404, majf=0, minf=32769 00:26:55.347 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.7%, 16=5.4%, 32=10.7%, >=64=78.9% 00:26:55.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.347 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:26:55.347 issued rwts: total=298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.347 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.347 job3: (groupid=0, jobs=1): err= 0: pid=182086: Wed Jul 24 14:25:22 2024 00:26:55.347 read: IOPS=11, BW=11.2MiB/s (11.7MB/s)(138MiB/12334msec) 00:26:55.347 slat (usec): min=314, max=2068.9k, avg=73812.13, stdev=325754.77 00:26:55.347 clat (msec): min=2146, max=12324, avg=7638.51, stdev=3168.28 00:26:55.347 lat (msec): min=3798, max=12325, avg=7712.32, stdev=3157.97 00:26:55.347 clat percentiles (msec): 00:26:55.347 | 1.00th=[ 3809], 5.00th=[ 3809], 10.00th=[ 4044], 20.00th=[ 4212], 00:26:55.347 | 30.00th=[ 5873], 40.00th=[ 6141], 50.00th=[ 6342], 60.00th=[ 
6477], 00:26:55.347 | 70.00th=[ 8557], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:26:55.347 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:26:55.347 | 99.99th=[12281] 00:26:55.347 bw ( KiB/s): min= 1950, max=14336, per=0.41%, avg=7476.67, stdev=6299.62, samples=3 00:26:55.347 iops : min= 1, max= 14, avg= 7.00, stdev= 6.56, samples=3 00:26:55.347 lat (msec) : >=2000=100.00% 00:26:55.347 cpu : usr=0.00%, sys=0.74%, ctx=225, majf=0, minf=32769 00:26:55.347 IO depths : 1=0.7%, 2=1.4%, 4=2.9%, 8=5.8%, 16=11.6%, 32=23.2%, >=64=54.3% 00:26:55.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.347 complete : 0=0.0%, 4=91.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=8.3% 00:26:55.347 issued rwts: total=138,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.347 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.347 job3: (groupid=0, jobs=1): err= 0: pid=182088: Wed Jul 24 14:25:22 2024 00:26:55.347 read: IOPS=4, BW=4934KiB/s (5052kB/s)(59.0MiB/12245msec) 00:26:55.347 slat (usec): min=499, max=2059.7k, avg=171347.13, stdev=540926.22 00:26:55.347 clat (msec): min=2134, max=12225, avg=8810.69, stdev=3284.28 00:26:55.347 lat (msec): min=4191, max=12244, avg=8982.03, stdev=3192.42 00:26:55.347 clat percentiles (msec): 00:26:55.347 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 4329], 00:26:55.347 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[ 8557], 60.00th=[12013], 00:26:55.347 | 70.00th=[12013], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:26:55.347 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:26:55.347 | 99.99th=[12281] 00:26:55.347 lat (msec) : >=2000=100.00% 00:26:55.347 cpu : usr=0.00%, sys=0.29%, ctx=68, majf=0, minf=15105 00:26:55.347 IO depths : 1=1.7%, 2=3.4%, 4=6.8%, 8=13.6%, 16=27.1%, 32=47.5%, >=64=0.0% 00:26:55.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.347 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:55.347 issued rwts: total=59,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.347 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.347 job3: (groupid=0, jobs=1): err= 0: pid=182089: Wed Jul 24 14:25:22 2024 00:26:55.347 read: IOPS=16, BW=16.6MiB/s (17.4MB/s)(204MiB/12283msec) 00:26:55.347 slat (usec): min=68, max=2076.3k, avg=49825.19, stdev=275267.60 00:26:55.347 clat (msec): min=1082, max=11273, avg=7148.50, stdev=4280.70 00:26:55.347 lat (msec): min=1087, max=11275, avg=7198.33, stdev=4269.90 00:26:55.347 clat percentiles (msec): 00:26:55.347 | 1.00th=[ 1099], 5.00th=[ 1167], 10.00th=[ 1267], 20.00th=[ 1519], 00:26:55.347 | 30.00th=[ 1703], 40.00th=[ 7080], 50.00th=[10402], 60.00th=[10537], 00:26:55.347 | 70.00th=[10537], 80.00th=[10805], 90.00th=[11073], 95.00th=[11208], 00:26:55.347 | 99.00th=[11208], 99.50th=[11208], 99.90th=[11208], 99.95th=[11208], 00:26:55.347 | 99.99th=[11208] 00:26:55.347 bw ( KiB/s): min= 2043, max=130810, per=1.42%, avg=26235.33, stdev=51273.34, samples=6 00:26:55.347 iops : min= 1, max= 127, avg=25.17, stdev=49.93, samples=6 00:26:55.347 lat (msec) : 2000=31.37%, >=2000=68.63% 00:26:55.347 cpu : usr=0.00%, sys=0.72%, ctx=347, majf=0, minf=32769 00:26:55.347 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=3.9%, 16=7.8%, 32=15.7%, >=64=69.1% 00:26:55.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.347 complete : 0=0.0%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.3% 00:26:55.347 issued rwts: total=204,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:26:55.347 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.347 job3: (groupid=0, jobs=1): err= 0: pid=182090: Wed Jul 24 14:25:22 2024 00:26:55.347 read: IOPS=6, BW=6575KiB/s (6733kB/s)(79.0MiB/12303msec) 00:26:55.347 slat (usec): min=445, max=2047.6k, avg=128910.26, stdev=471774.19 00:26:55.347 clat (msec): min=2118, max=12301, avg=9749.60, stdev=3213.27 00:26:55.347 lat (msec): min=4165, max=12302, avg=9878.51, stdev=3105.67 00:26:55.347 clat percentiles (msec): 00:26:55.347 | 1.00th=[ 2123], 5.00th=[ 4178], 10.00th=[ 4245], 20.00th=[ 6342], 00:26:55.347 | 30.00th=[ 8490], 40.00th=[10671], 50.00th=[12147], 60.00th=[12147], 00:26:55.347 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:26:55.347 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:26:55.347 | 99.99th=[12281] 00:26:55.347 lat (msec) : >=2000=100.00% 00:26:55.347 cpu : usr=0.00%, sys=0.50%, ctx=99, majf=0, minf=20225 00:26:55.347 IO depths : 1=1.3%, 2=2.5%, 4=5.1%, 8=10.1%, 16=20.3%, 32=40.5%, >=64=20.3% 00:26:55.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.347 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:26:55.347 issued rwts: total=79,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.347 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.347 job3: (groupid=0, jobs=1): err= 0: pid=182091: Wed Jul 24 14:25:22 2024 00:26:55.347 read: IOPS=83, BW=83.2MiB/s (87.2MB/s)(1019MiB/12253msec) 00:26:55.347 slat (usec): min=42, max=2056.2k, avg=9938.60, stdev=106317.66 00:26:55.347 clat (msec): min=132, max=6353, avg=1408.43, stdev=1689.48 00:26:55.347 lat (msec): min=134, max=6376, avg=1418.37, stdev=1697.89 00:26:55.347 clat percentiles (msec): 00:26:55.347 | 1.00th=[ 136], 5.00th=[ 138], 10.00th=[ 169], 20.00th=[ 271], 00:26:55.347 | 30.00th=[ 288], 40.00th=[ 368], 50.00th=[ 514], 60.00th=[ 575], 00:26:55.348 | 70.00th=[ 1070], 80.00th=[ 3943], 90.00th=[ 4597], 95.00th=[ 4597], 00:26:55.348 | 99.00th=[ 4665], 99.50th=[ 4665], 99.90th=[ 4665], 99.95th=[ 6342], 00:26:55.348 | 99.99th=[ 6342] 00:26:55.348 bw ( KiB/s): min= 1418, max=539592, per=11.00%, avg=202787.67, stdev=181706.22, samples=9 00:26:55.348 iops : min= 1, max= 526, avg=197.78, stdev=177.41, samples=9 00:26:55.348 lat (msec) : 250=16.39%, 500=31.99%, 750=17.27%, 1000=3.04%, 2000=6.08% 00:26:55.348 lat (msec) : >=2000=25.22% 00:26:55.348 cpu : usr=0.02%, sys=1.16%, ctx=1071, majf=0, minf=32769 00:26:55.348 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.8% 00:26:55.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.348 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:55.348 issued rwts: total=1019,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.348 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.348 job3: (groupid=0, jobs=1): err= 0: pid=182092: Wed Jul 24 14:25:22 2024 00:26:55.348 read: IOPS=23, BW=23.7MiB/s (24.9MB/s)(291MiB/12268msec) 00:26:55.348 slat (usec): min=57, max=2081.1k, avg=34881.85, stdev=237381.22 00:26:55.348 clat (msec): min=533, max=11041, avg=5084.39, stdev=4796.61 00:26:55.348 lat (msec): min=535, max=11042, avg=5119.27, stdev=4802.51 00:26:55.348 clat percentiles (msec): 00:26:55.348 | 1.00th=[ 535], 5.00th=[ 567], 10.00th=[ 575], 20.00th=[ 634], 00:26:55.348 | 30.00th=[ 676], 40.00th=[ 726], 50.00th=[ 1062], 60.00th=[ 8490], 00:26:55.348 | 70.00th=[10671], 
80.00th=[10805], 90.00th=[10939], 95.00th=[10939], 00:26:55.348 | 99.00th=[11073], 99.50th=[11073], 99.90th=[11073], 99.95th=[11073], 00:26:55.348 | 99.99th=[11073] 00:26:55.348 bw ( KiB/s): min= 1396, max=225280, per=2.60%, avg=47886.71, stdev=82525.09, samples=7 00:26:55.348 iops : min= 1, max= 220, avg=46.57, stdev=80.71, samples=7 00:26:55.348 lat (msec) : 750=41.24%, 1000=5.50%, 2000=3.78%, >=2000=49.48% 00:26:55.348 cpu : usr=0.01%, sys=0.68%, ctx=307, majf=0, minf=32769 00:26:55.348 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.7%, 16=5.5%, 32=11.0%, >=64=78.4% 00:26:55.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.348 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:26:55.348 issued rwts: total=291,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.348 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.348 job3: (groupid=0, jobs=1): err= 0: pid=182093: Wed Jul 24 14:25:22 2024 00:26:55.348 read: IOPS=3, BW=3690KiB/s (3779kB/s)(44.0MiB/12209msec) 00:26:55.348 slat (usec): min=475, max=2079.0k, avg=229390.07, stdev=611393.26 00:26:55.348 clat (msec): min=2115, max=12208, avg=9646.26, stdev=3031.66 00:26:55.348 lat (msec): min=4194, max=12208, avg=9875.65, stdev=2823.27 00:26:55.348 clat percentiles (msec): 00:26:55.348 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 4279], 20.00th=[ 6342], 00:26:55.348 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[10671], 60.00th=[12013], 00:26:55.348 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:26:55.348 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:26:55.348 | 99.99th=[12147] 00:26:55.348 lat (msec) : >=2000=100.00% 00:26:55.348 cpu : usr=0.00%, sys=0.18%, ctx=70, majf=0, minf=11265 00:26:55.348 IO depths : 1=2.3%, 2=4.5%, 4=9.1%, 8=18.2%, 16=36.4%, 32=29.5%, >=64=0.0% 00:26:55.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.348 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:55.348 issued rwts: total=44,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.348 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.348 job3: (groupid=0, jobs=1): err= 0: pid=182095: Wed Jul 24 14:25:22 2024 00:26:55.348 read: IOPS=109, BW=110MiB/s (115MB/s)(1107MiB/10099msec) 00:26:55.348 slat (usec): min=58, max=2057.9k, avg=9029.10, stdev=73970.15 00:26:55.348 clat (msec): min=95, max=3023, avg=1035.56, stdev=760.08 00:26:55.348 lat (msec): min=104, max=3030, avg=1044.59, stdev=762.89 00:26:55.348 clat percentiles (msec): 00:26:55.348 | 1.00th=[ 178], 5.00th=[ 460], 10.00th=[ 567], 20.00th=[ 584], 00:26:55.348 | 30.00th=[ 701], 40.00th=[ 718], 50.00th=[ 751], 60.00th=[ 810], 00:26:55.348 | 70.00th=[ 877], 80.00th=[ 961], 90.00th=[ 2802], 95.00th=[ 2937], 00:26:55.348 | 99.00th=[ 3004], 99.50th=[ 3004], 99.90th=[ 3004], 99.95th=[ 3037], 00:26:55.348 | 99.99th=[ 3037] 00:26:55.348 bw ( KiB/s): min=55296, max=229376, per=8.37%, avg=154387.69, stdev=49686.26, samples=13 00:26:55.348 iops : min= 54, max= 224, avg=150.77, stdev=48.52, samples=13 00:26:55.348 lat (msec) : 100=0.09%, 250=1.81%, 500=3.97%, 750=43.72%, 1000=33.79% 00:26:55.348 lat (msec) : 2000=0.18%, >=2000=16.44% 00:26:55.348 cpu : usr=0.08%, sys=1.56%, ctx=1033, majf=0, minf=32769 00:26:55.348 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.3% 00:26:55.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.348 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.1% 00:26:55.348 issued rwts: total=1107,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.348 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.348 job3: (groupid=0, jobs=1): err= 0: pid=182096: Wed Jul 24 14:25:22 2024 00:26:55.348 read: IOPS=41, BW=41.8MiB/s (43.8MB/s)(512MiB/12263msec) 00:26:55.348 slat (usec): min=49, max=2057.7k, avg=19751.17, stdev=164675.31 00:26:55.348 clat (msec): min=109, max=8496, avg=2032.87, stdev=2321.54 00:26:55.348 lat (msec): min=110, max=8496, avg=2052.62, stdev=2343.88 00:26:55.348 clat percentiles (msec): 00:26:55.348 | 1.00th=[ 112], 5.00th=[ 123], 10.00th=[ 123], 20.00th=[ 124], 00:26:55.348 | 30.00th=[ 124], 40.00th=[ 125], 50.00th=[ 126], 60.00th=[ 2366], 00:26:55.348 | 70.00th=[ 3339], 80.00th=[ 4329], 90.00th=[ 5940], 95.00th=[ 6007], 00:26:55.348 | 99.00th=[ 8490], 99.50th=[ 8490], 99.90th=[ 8490], 99.95th=[ 8490], 00:26:55.348 | 99.99th=[ 8490] 00:26:55.348 bw ( KiB/s): min= 1454, max=583680, per=6.10%, avg=112555.14, stdev=209649.92, samples=7 00:26:55.348 iops : min= 1, max= 570, avg=109.86, stdev=204.77, samples=7 00:26:55.348 lat (msec) : 250=53.12%, 2000=2.34%, >=2000=44.53% 00:26:55.348 cpu : usr=0.04%, sys=0.88%, ctx=532, majf=0, minf=32769 00:26:55.348 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.1%, 32=6.2%, >=64=87.7% 00:26:55.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.348 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:26:55.348 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.348 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.348 job3: (groupid=0, jobs=1): err= 0: pid=182097: Wed Jul 24 14:25:22 2024 00:26:55.348 read: IOPS=2, BW=2857KiB/s (2925kB/s)(34.0MiB/12188msec) 00:26:55.348 slat (usec): min=537, max=2059.5k, avg=295439.09, stdev=687322.37 00:26:55.348 clat (msec): min=2141, max=12186, avg=9249.56, stdev=3141.94 00:26:55.348 lat (msec): min=4201, max=12187, avg=9545.00, stdev=2917.62 00:26:55.348 clat percentiles (msec): 00:26:55.348 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4279], 20.00th=[ 6409], 00:26:55.348 | 30.00th=[ 8490], 40.00th=[ 8557], 50.00th=[ 8557], 60.00th=[12013], 00:26:55.348 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:26:55.348 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:26:55.348 | 99.99th=[12147] 00:26:55.348 lat (msec) : >=2000=100.00% 00:26:55.348 cpu : usr=0.01%, sys=0.14%, ctx=59, majf=0, minf=8705 00:26:55.348 IO depths : 1=2.9%, 2=5.9%, 4=11.8%, 8=23.5%, 16=47.1%, 32=8.8%, >=64=0.0% 00:26:55.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.348 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:55.348 issued rwts: total=34,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.348 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.348 job3: (groupid=0, jobs=1): err= 0: pid=182098: Wed Jul 24 14:25:22 2024 00:26:55.348 read: IOPS=3, BW=3273KiB/s (3352kB/s)(39.0MiB/12201msec) 00:26:55.348 slat (usec): min=560, max=2038.5k, avg=258767.18, stdev=644157.32 00:26:55.348 clat (msec): min=2108, max=12190, avg=7315.21, stdev=2832.68 00:26:55.348 lat (msec): min=4139, max=12200, avg=7573.98, stdev=2805.37 00:26:55.348 clat percentiles (msec): 00:26:55.348 | 1.00th=[ 2106], 5.00th=[ 4144], 10.00th=[ 4178], 20.00th=[ 4245], 00:26:55.348 | 30.00th=[ 6275], 40.00th=[ 6342], 50.00th=[ 6409], 60.00th=[ 8490], 00:26:55.348 | 70.00th=[ 
8490], 80.00th=[10671], 90.00th=[12147], 95.00th=[12147], 00:26:55.348 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:26:55.348 | 99.99th=[12147] 00:26:55.348 lat (msec) : >=2000=100.00% 00:26:55.348 cpu : usr=0.00%, sys=0.16%, ctx=69, majf=0, minf=9985 00:26:55.348 IO depths : 1=2.6%, 2=5.1%, 4=10.3%, 8=20.5%, 16=41.0%, 32=20.5%, >=64=0.0% 00:26:55.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.348 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:55.348 issued rwts: total=39,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.348 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.348 job4: (groupid=0, jobs=1): err= 0: pid=182136: Wed Jul 24 14:25:22 2024 00:26:55.348 read: IOPS=4, BW=4883KiB/s (5001kB/s)(58.0MiB/12162msec) 00:26:55.348 slat (usec): min=430, max=2047.4k, avg=173079.22, stdev=541205.10 00:26:55.348 clat (msec): min=2122, max=12159, avg=8066.23, stdev=3067.15 00:26:55.348 lat (msec): min=4149, max=12161, avg=8239.31, stdev=3008.57 00:26:55.348 clat percentiles (msec): 00:26:55.348 | 1.00th=[ 2123], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 4279], 00:26:55.348 | 30.00th=[ 6342], 40.00th=[ 6409], 50.00th=[ 8423], 60.00th=[ 8490], 00:26:55.348 | 70.00th=[10671], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:26:55.348 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:26:55.348 | 99.99th=[12147] 00:26:55.348 lat (msec) : >=2000=100.00% 00:26:55.348 cpu : usr=0.00%, sys=0.24%, ctx=68, majf=0, minf=14849 00:26:55.348 IO depths : 1=1.7%, 2=3.4%, 4=6.9%, 8=13.8%, 16=27.6%, 32=46.6%, >=64=0.0% 00:26:55.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.348 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:55.348 issued rwts: total=58,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.348 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.348 job4: (groupid=0, jobs=1): err= 0: pid=182137: Wed Jul 24 14:25:22 2024 00:26:55.348 read: IOPS=6, BW=6537KiB/s (6694kB/s)(78.0MiB/12219msec) 00:26:55.348 slat (usec): min=668, max=2022.8k, avg=129460.06, stdev=467365.93 00:26:55.348 clat (msec): min=2120, max=12216, avg=8859.49, stdev=3114.95 00:26:55.348 lat (msec): min=4126, max=12218, avg=8988.95, stdev=3040.18 00:26:55.349 clat percentiles (msec): 00:26:55.349 | 1.00th=[ 2123], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6275], 00:26:55.349 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[ 8557], 60.00th=[10671], 00:26:55.349 | 70.00th=[12013], 80.00th=[12013], 90.00th=[12147], 95.00th=[12147], 00:26:55.349 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:26:55.349 | 99.99th=[12281] 00:26:55.349 lat (msec) : >=2000=100.00% 00:26:55.349 cpu : usr=0.00%, sys=0.51%, ctx=83, majf=0, minf=19969 00:26:55.349 IO depths : 1=1.3%, 2=2.6%, 4=5.1%, 8=10.3%, 16=20.5%, 32=41.0%, >=64=19.2% 00:26:55.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.349 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:26:55.349 issued rwts: total=78,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.349 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.349 job4: (groupid=0, jobs=1): err= 0: pid=182138: Wed Jul 24 14:25:22 2024 00:26:55.349 read: IOPS=29, BW=29.4MiB/s (30.8MB/s)(361MiB/12283msec) 00:26:55.349 slat (usec): min=44, max=2105.3k, avg=28193.28, stdev=201853.60 00:26:55.349 clat (msec): min=380, max=6588, 
avg=2926.33, stdev=2409.66 00:26:55.349 lat (msec): min=380, max=6591, avg=2954.52, stdev=2415.54 00:26:55.349 clat percentiles (msec): 00:26:55.349 | 1.00th=[ 384], 5.00th=[ 430], 10.00th=[ 430], 20.00th=[ 460], 00:26:55.349 | 30.00th=[ 600], 40.00th=[ 827], 50.00th=[ 1234], 60.00th=[ 5000], 00:26:55.349 | 70.00th=[ 5134], 80.00th=[ 5269], 90.00th=[ 6544], 95.00th=[ 6544], 00:26:55.349 | 99.00th=[ 6611], 99.50th=[ 6611], 99.90th=[ 6611], 99.95th=[ 6611], 00:26:55.349 | 99.99th=[ 6611] 00:26:55.349 bw ( KiB/s): min= 1973, max=270336, per=5.20%, avg=95831.40, stdev=116799.56, samples=5 00:26:55.349 iops : min= 1, max= 264, avg=93.40, stdev=114.25, samples=5 00:26:55.349 lat (msec) : 500=24.65%, 750=12.74%, 1000=6.65%, 2000=6.09%, >=2000=49.86% 00:26:55.349 cpu : usr=0.00%, sys=0.91%, ctx=344, majf=0, minf=32769 00:26:55.349 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.2%, 16=4.4%, 32=8.9%, >=64=82.5% 00:26:55.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.349 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:26:55.349 issued rwts: total=361,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.349 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.349 job4: (groupid=0, jobs=1): err= 0: pid=182139: Wed Jul 24 14:25:22 2024 00:26:55.349 read: IOPS=9, BW=9627KiB/s (9858kB/s)(115MiB/12232msec) 00:26:55.349 slat (usec): min=457, max=2035.8k, avg=87904.20, stdev=375946.68 00:26:55.349 clat (msec): min=2121, max=12229, avg=9964.12, stdev=2452.42 00:26:55.349 lat (msec): min=4144, max=12231, avg=10052.03, stdev=2347.80 00:26:55.349 clat percentiles (msec): 00:26:55.349 | 1.00th=[ 4144], 5.00th=[ 4212], 10.00th=[ 6275], 20.00th=[ 8490], 00:26:55.349 | 30.00th=[10402], 40.00th=[10402], 50.00th=[10537], 60.00th=[10671], 00:26:55.349 | 70.00th=[12013], 80.00th=[12147], 90.00th=[12147], 95.00th=[12281], 00:26:55.349 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:26:55.349 | 99.99th=[12281] 00:26:55.349 lat (msec) : >=2000=100.00% 00:26:55.349 cpu : usr=0.00%, sys=0.55%, ctx=129, majf=0, minf=29441 00:26:55.349 IO depths : 1=0.9%, 2=1.7%, 4=3.5%, 8=7.0%, 16=13.9%, 32=27.8%, >=64=45.2% 00:26:55.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.349 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:26:55.349 issued rwts: total=115,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.349 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.349 job4: (groupid=0, jobs=1): err= 0: pid=182140: Wed Jul 24 14:25:22 2024 00:26:55.349 read: IOPS=36, BW=36.8MiB/s (38.6MB/s)(452MiB/12293msec) 00:26:55.349 slat (usec): min=48, max=2046.8k, avg=22510.61, stdev=168753.98 00:26:55.349 clat (msec): min=347, max=10615, avg=3294.39, stdev=3356.14 00:26:55.349 lat (msec): min=350, max=10738, avg=3316.90, stdev=3367.53 00:26:55.349 clat percentiles (msec): 00:26:55.349 | 1.00th=[ 363], 5.00th=[ 388], 10.00th=[ 527], 20.00th=[ 869], 00:26:55.349 | 30.00th=[ 944], 40.00th=[ 1003], 50.00th=[ 1053], 60.00th=[ 3641], 00:26:55.349 | 70.00th=[ 3876], 80.00th=[ 6141], 90.00th=[ 9866], 95.00th=[ 9866], 00:26:55.349 | 99.00th=[ 9866], 99.50th=[ 9866], 99.90th=[10671], 99.95th=[10671], 00:26:55.349 | 99.99th=[10671] 00:26:55.349 bw ( KiB/s): min= 1984, max=309248, per=3.61%, avg=66535.20, stdev=96447.61, samples=10 00:26:55.349 iops : min= 1, max= 302, avg=64.80, stdev=94.23, samples=10 00:26:55.349 lat (msec) : 500=9.07%, 750=6.64%, 1000=24.78%, 2000=19.03%, >=2000=40.49% 
00:26:55.349 cpu : usr=0.02%, sys=1.07%, ctx=649, majf=0, minf=32769 00:26:55.349 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.5%, 32=7.1%, >=64=86.1% 00:26:55.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.349 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:26:55.349 issued rwts: total=452,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.349 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.349 job4: (groupid=0, jobs=1): err= 0: pid=182141: Wed Jul 24 14:25:22 2024 00:26:55.349 read: IOPS=3, BW=3604KiB/s (3690kB/s)(43.0MiB/12218msec) 00:26:55.349 slat (usec): min=436, max=2047.6k, avg=234572.27, stdev=613502.84 00:26:55.349 clat (msec): min=2130, max=12214, avg=8784.54, stdev=3176.05 00:26:55.349 lat (msec): min=4167, max=12217, avg=9019.12, stdev=3042.62 00:26:55.349 clat percentiles (msec): 00:26:55.349 | 1.00th=[ 2140], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6342], 00:26:55.349 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[ 8557], 60.00th=[10537], 00:26:55.349 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:26:55.349 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:26:55.349 | 99.99th=[12281] 00:26:55.349 lat (msec) : >=2000=100.00% 00:26:55.349 cpu : usr=0.00%, sys=0.23%, ctx=65, majf=0, minf=11009 00:26:55.349 IO depths : 1=2.3%, 2=4.7%, 4=9.3%, 8=18.6%, 16=37.2%, 32=27.9%, >=64=0.0% 00:26:55.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.349 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:55.349 issued rwts: total=43,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.349 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.349 job4: (groupid=0, jobs=1): err= 0: pid=182143: Wed Jul 24 14:25:22 2024 00:26:55.349 read: IOPS=82, BW=82.4MiB/s (86.4MB/s)(1005MiB/12200msec) 00:26:55.349 slat (usec): min=43, max=2067.9k, avg=10032.62, stdev=110271.84 00:26:55.349 clat (msec): min=120, max=8895, avg=1454.20, stdev=2687.37 00:26:55.349 lat (msec): min=121, max=8897, avg=1464.24, stdev=2696.68 00:26:55.349 clat percentiles (msec): 00:26:55.349 | 1.00th=[ 124], 5.00th=[ 134], 10.00th=[ 136], 20.00th=[ 136], 00:26:55.349 | 30.00th=[ 138], 40.00th=[ 230], 50.00th=[ 284], 60.00th=[ 439], 00:26:55.349 | 70.00th=[ 835], 80.00th=[ 1167], 90.00th=[ 8423], 95.00th=[ 8658], 00:26:55.349 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:26:55.349 | 99.99th=[ 8926] 00:26:55.349 bw ( KiB/s): min= 1519, max=831488, per=9.75%, avg=179728.30, stdev=260502.05, samples=10 00:26:55.349 iops : min= 1, max= 812, avg=175.30, stdev=254.50, samples=10 00:26:55.349 lat (msec) : 250=41.59%, 500=19.90%, 750=7.26%, 1000=2.79%, 2000=14.63% 00:26:55.349 lat (msec) : >=2000=13.83% 00:26:55.349 cpu : usr=0.07%, sys=1.32%, ctx=1097, majf=0, minf=32769 00:26:55.349 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.7% 00:26:55.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.349 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:55.349 issued rwts: total=1005,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.349 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.349 job4: (groupid=0, jobs=1): err= 0: pid=182144: Wed Jul 24 14:25:22 2024 00:26:55.349 read: IOPS=28, BW=28.1MiB/s (29.4MB/s)(344MiB/12254msec) 00:26:55.349 slat (usec): min=63, max=2144.1k, avg=29514.06, stdev=203495.76 00:26:55.349 
clat (msec): min=660, max=10935, avg=4392.47, stdev=4384.40 00:26:55.349 lat (msec): min=665, max=10936, avg=4421.98, stdev=4394.13 00:26:55.349 clat percentiles (msec): 00:26:55.349 | 1.00th=[ 667], 5.00th=[ 684], 10.00th=[ 709], 20.00th=[ 751], 00:26:55.349 | 30.00th=[ 902], 40.00th=[ 1070], 50.00th=[ 1217], 60.00th=[ 2500], 00:26:55.349 | 70.00th=[ 9866], 80.00th=[10134], 90.00th=[10537], 95.00th=[10671], 00:26:55.349 | 99.00th=[10939], 99.50th=[10939], 99.90th=[10939], 99.95th=[10939], 00:26:55.349 | 99.99th=[10939] 00:26:55.349 bw ( KiB/s): min= 2052, max=114688, per=2.68%, avg=49362.11, stdev=52040.58, samples=9 00:26:55.349 iops : min= 2, max= 112, avg=48.11, stdev=50.73, samples=9 00:26:55.349 lat (msec) : 750=19.19%, 1000=13.95%, 2000=25.87%, >=2000=40.99% 00:26:55.349 cpu : usr=0.02%, sys=0.93%, ctx=471, majf=0, minf=32769 00:26:55.349 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.3%, 16=4.7%, 32=9.3%, >=64=81.7% 00:26:55.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.349 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:26:55.349 issued rwts: total=344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.349 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.349 job4: (groupid=0, jobs=1): err= 0: pid=182145: Wed Jul 24 14:25:22 2024 00:26:55.349 read: IOPS=5, BW=5257KiB/s (5383kB/s)(63.0MiB/12271msec) 00:26:55.349 slat (usec): min=409, max=2127.7k, avg=161316.54, stdev=515583.88 00:26:55.349 clat (msec): min=2107, max=12269, avg=8711.48, stdev=2784.93 00:26:55.349 lat (msec): min=4078, max=12270, avg=8872.79, stdev=2688.91 00:26:55.349 clat percentiles (msec): 00:26:55.349 | 1.00th=[ 2106], 5.00th=[ 4329], 10.00th=[ 6208], 20.00th=[ 6275], 00:26:55.349 | 30.00th=[ 6275], 40.00th=[ 8423], 50.00th=[ 8423], 60.00th=[ 8557], 00:26:55.349 | 70.00th=[12013], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:26:55.349 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:26:55.349 | 99.99th=[12281] 00:26:55.349 lat (msec) : >=2000=100.00% 00:26:55.349 cpu : usr=0.00%, sys=0.30%, ctx=85, majf=0, minf=16129 00:26:55.349 IO depths : 1=1.6%, 2=3.2%, 4=6.3%, 8=12.7%, 16=25.4%, 32=50.8%, >=64=0.0% 00:26:55.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.349 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:55.349 issued rwts: total=63,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.350 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.350 job4: (groupid=0, jobs=1): err= 0: pid=182146: Wed Jul 24 14:25:22 2024 00:26:55.350 read: IOPS=55, BW=55.6MiB/s (58.3MB/s)(566MiB/10183msec) 00:26:55.350 slat (usec): min=44, max=2020.5k, avg=17690.74, stdev=134640.20 00:26:55.350 clat (msec): min=166, max=8040, avg=1600.65, stdev=1510.09 00:26:55.350 lat (msec): min=214, max=8088, avg=1618.34, stdev=1531.08 00:26:55.350 clat percentiles (msec): 00:26:55.350 | 1.00th=[ 234], 5.00th=[ 418], 10.00th=[ 422], 20.00th=[ 430], 00:26:55.350 | 30.00th=[ 464], 40.00th=[ 535], 50.00th=[ 634], 60.00th=[ 1200], 00:26:55.350 | 70.00th=[ 1989], 80.00th=[ 3440], 90.00th=[ 4044], 95.00th=[ 4245], 00:26:55.350 | 99.00th=[ 4463], 99.50th=[ 8020], 99.90th=[ 8020], 99.95th=[ 8020], 00:26:55.350 | 99.99th=[ 8020] 00:26:55.350 bw ( KiB/s): min=16384, max=311296, per=5.40%, avg=99603.44, stdev=92455.05, samples=9 00:26:55.350 iops : min= 16, max= 304, avg=97.11, stdev=90.20, samples=9 00:26:55.350 lat (msec) : 250=1.77%, 500=32.33%, 750=19.08%, 1000=3.18%, 
2000=13.78% 00:26:55.350 lat (msec) : >=2000=29.86% 00:26:55.350 cpu : usr=0.04%, sys=1.10%, ctx=661, majf=0, minf=32769 00:26:55.350 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.7%, >=64=88.9% 00:26:55.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.350 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:26:55.350 issued rwts: total=566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.350 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.350 job4: (groupid=0, jobs=1): err= 0: pid=182147: Wed Jul 24 14:25:22 2024 00:26:55.350 read: IOPS=48, BW=48.1MiB/s (50.5MB/s)(491MiB/10204msec) 00:26:55.350 slat (usec): min=44, max=2138.1k, avg=20441.52, stdev=149390.61 00:26:55.350 clat (msec): min=163, max=7085, avg=2539.04, stdev=2331.92 00:26:55.350 lat (msec): min=247, max=7087, avg=2559.49, stdev=2339.47 00:26:55.350 clat percentiles (msec): 00:26:55.350 | 1.00th=[ 257], 5.00th=[ 451], 10.00th=[ 659], 20.00th=[ 877], 00:26:55.350 | 30.00th=[ 1200], 40.00th=[ 1284], 50.00th=[ 1334], 60.00th=[ 1469], 00:26:55.350 | 70.00th=[ 2056], 80.00th=[ 6208], 90.00th=[ 6678], 95.00th=[ 6946], 00:26:55.350 | 99.00th=[ 7080], 99.50th=[ 7080], 99.90th=[ 7080], 99.95th=[ 7080], 00:26:55.350 | 99.99th=[ 7080] 00:26:55.350 bw ( KiB/s): min=10240, max=143360, per=3.67%, avg=67584.00, stdev=46764.41, samples=11 00:26:55.350 iops : min= 10, max= 140, avg=66.00, stdev=45.67, samples=11 00:26:55.350 lat (msec) : 250=0.41%, 500=4.89%, 750=8.15%, 1000=12.22%, 2000=43.58% 00:26:55.350 lat (msec) : >=2000=30.75% 00:26:55.350 cpu : usr=0.03%, sys=1.37%, ctx=654, majf=0, minf=32769 00:26:55.350 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.3%, 32=6.5%, >=64=87.2% 00:26:55.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.350 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:26:55.350 issued rwts: total=491,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.350 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.350 job4: (groupid=0, jobs=1): err= 0: pid=182148: Wed Jul 24 14:25:22 2024 00:26:55.350 read: IOPS=7, BW=7189KiB/s (7362kB/s)(86.0MiB/12249msec) 00:26:55.350 slat (usec): min=473, max=2023.4k, avg=117639.01, stdev=443930.81 00:26:55.350 clat (msec): min=2130, max=12245, avg=9360.23, stdev=3094.61 00:26:55.350 lat (msec): min=4154, max=12248, avg=9477.87, stdev=3007.63 00:26:55.350 clat percentiles (msec): 00:26:55.350 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4279], 20.00th=[ 6342], 00:26:55.350 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[12013], 00:26:55.350 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12281], 95.00th=[12281], 00:26:55.350 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:26:55.350 | 99.99th=[12281] 00:26:55.350 lat (msec) : >=2000=100.00% 00:26:55.350 cpu : usr=0.00%, sys=0.47%, ctx=101, majf=0, minf=22017 00:26:55.350 IO depths : 1=1.2%, 2=2.3%, 4=4.7%, 8=9.3%, 16=18.6%, 32=37.2%, >=64=26.7% 00:26:55.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.350 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:26:55.350 issued rwts: total=86,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.350 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.350 job4: (groupid=0, jobs=1): err= 0: pid=182150: Wed Jul 24 14:25:22 2024 00:26:55.350 read: IOPS=3, BW=3774KiB/s (3865kB/s)(45.0MiB/12209msec) 00:26:55.350 slat (usec): min=463, 
max=2145.5k, avg=224214.20, stdev=609648.21 00:26:55.350 clat (msec): min=2119, max=12197, avg=9017.57, stdev=2970.52 00:26:55.350 lat (msec): min=4264, max=12208, avg=9241.79, stdev=2814.69 00:26:55.350 clat percentiles (msec): 00:26:55.350 | 1.00th=[ 2123], 5.00th=[ 4279], 10.00th=[ 4279], 20.00th=[ 6342], 00:26:55.350 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[10671], 60.00th=[10671], 00:26:55.350 | 70.00th=[12013], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:26:55.350 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:26:55.350 | 99.99th=[12147] 00:26:55.350 lat (msec) : >=2000=100.00% 00:26:55.350 cpu : usr=0.00%, sys=0.21%, ctx=65, majf=0, minf=11521 00:26:55.350 IO depths : 1=2.2%, 2=4.4%, 4=8.9%, 8=17.8%, 16=35.6%, 32=31.1%, >=64=0.0% 00:26:55.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.350 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:26:55.350 issued rwts: total=45,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.350 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.350 job5: (groupid=0, jobs=1): err= 0: pid=182167: Wed Jul 24 14:25:22 2024 00:26:55.350 read: IOPS=17, BW=18.0MiB/s (18.8MB/s)(220MiB/12245msec) 00:26:55.350 slat (usec): min=68, max=2015.5k, avg=45866.67, stdev=277410.57 00:26:55.350 clat (msec): min=228, max=12208, avg=6933.99, stdev=4354.98 00:26:55.350 lat (msec): min=229, max=12210, avg=6979.85, stdev=4355.46 00:26:55.350 clat percentiles (msec): 00:26:55.350 | 1.00th=[ 230], 5.00th=[ 232], 10.00th=[ 359], 20.00th=[ 1318], 00:26:55.350 | 30.00th=[ 4178], 40.00th=[ 5537], 50.00th=[ 6477], 60.00th=[ 8557], 00:26:55.350 | 70.00th=[11879], 80.00th=[11879], 90.00th=[11879], 95.00th=[11879], 00:26:55.350 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:26:55.350 | 99.99th=[12147] 00:26:55.350 bw ( KiB/s): min= 2052, max=67584, per=1.29%, avg=23809.62, stdev=21335.93, samples=8 00:26:55.350 iops : min= 2, max= 66, avg=23.00, stdev=20.79, samples=8 00:26:55.350 lat (msec) : 250=6.36%, 500=8.64%, 2000=6.36%, >=2000=78.64% 00:26:55.350 cpu : usr=0.03%, sys=0.79%, ctx=168, majf=0, minf=32769 00:26:55.350 IO depths : 1=0.5%, 2=0.9%, 4=1.8%, 8=3.6%, 16=7.3%, 32=14.5%, >=64=71.4% 00:26:55.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.350 complete : 0=0.0%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.1% 00:26:55.350 issued rwts: total=220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.350 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.350 job5: (groupid=0, jobs=1): err= 0: pid=182169: Wed Jul 24 14:25:22 2024 00:26:55.350 read: IOPS=69, BW=69.2MiB/s (72.6MB/s)(842MiB/12159msec) 00:26:55.350 slat (usec): min=43, max=2009.0k, avg=11879.64, stdev=128427.60 00:26:55.350 clat (msec): min=135, max=6981, avg=1278.22, stdev=2262.30 00:26:55.350 lat (msec): min=135, max=6984, avg=1290.10, stdev=2271.69 00:26:55.350 clat percentiles (msec): 00:26:55.350 | 1.00th=[ 142], 5.00th=[ 144], 10.00th=[ 144], 20.00th=[ 146], 00:26:55.350 | 30.00th=[ 174], 40.00th=[ 224], 50.00th=[ 268], 60.00th=[ 275], 00:26:55.350 | 70.00th=[ 518], 80.00th=[ 684], 90.00th=[ 6544], 95.00th=[ 6745], 00:26:55.350 | 99.00th=[ 6946], 99.50th=[ 6946], 99.90th=[ 7013], 99.95th=[ 7013], 00:26:55.350 | 99.99th=[ 7013] 00:26:55.350 bw ( KiB/s): min= 2048, max=747520, per=9.93%, avg=183040.00, stdev=266064.26, samples=8 00:26:55.350 iops : min= 2, max= 730, avg=178.75, stdev=259.83, samples=8 00:26:55.350 lat 
(msec) : 250=44.54%, 500=24.82%, 750=13.30%, 1000=0.12%, >=2000=17.22% 00:26:55.350 cpu : usr=0.03%, sys=0.95%, ctx=1200, majf=0, minf=32769 00:26:55.350 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.8%, >=64=92.5% 00:26:55.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.350 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:55.350 issued rwts: total=842,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.350 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.350 job5: (groupid=0, jobs=1): err= 0: pid=182170: Wed Jul 24 14:25:22 2024 00:26:55.350 read: IOPS=10, BW=11.0MiB/s (11.5MB/s)(112MiB/10205msec) 00:26:55.350 slat (usec): min=405, max=2017.4k, avg=89986.46, stdev=388698.43 00:26:55.350 clat (msec): min=125, max=10203, avg=5763.65, stdev=2791.53 00:26:55.350 lat (msec): min=2133, max=10204, avg=5853.64, stdev=2770.51 00:26:55.350 clat percentiles (msec): 00:26:55.350 | 1.00th=[ 2140], 5.00th=[ 2299], 10.00th=[ 2299], 20.00th=[ 4212], 00:26:55.350 | 30.00th=[ 4245], 40.00th=[ 4245], 50.00th=[ 4329], 60.00th=[ 6409], 00:26:55.350 | 70.00th=[ 6544], 80.00th=[10000], 90.00th=[10134], 95.00th=[10134], 00:26:55.350 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:26:55.350 | 99.99th=[10268] 00:26:55.350 lat (msec) : 250=0.89%, >=2000=99.11% 00:26:55.350 cpu : usr=0.02%, sys=0.57%, ctx=106, majf=0, minf=28673 00:26:55.350 IO depths : 1=0.9%, 2=1.8%, 4=3.6%, 8=7.1%, 16=14.3%, 32=28.6%, >=64=43.8% 00:26:55.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.350 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:26:55.350 issued rwts: total=112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.350 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.350 job5: (groupid=0, jobs=1): err= 0: pid=182172: Wed Jul 24 14:25:22 2024 00:26:55.350 read: IOPS=80, BW=80.2MiB/s (84.1MB/s)(823MiB/10267msec) 00:26:55.350 slat (usec): min=52, max=2115.0k, avg=12307.87, stdev=133297.42 00:26:55.350 clat (msec): min=130, max=6218, avg=1045.81, stdev=1508.02 00:26:55.350 lat (msec): min=257, max=6221, avg=1058.12, stdev=1518.58 00:26:55.350 clat percentiles (msec): 00:26:55.350 | 1.00th=[ 264], 5.00th=[ 279], 10.00th=[ 284], 20.00th=[ 292], 00:26:55.350 | 30.00th=[ 309], 40.00th=[ 347], 50.00th=[ 368], 60.00th=[ 405], 00:26:55.350 | 70.00th=[ 439], 80.00th=[ 2299], 90.00th=[ 2500], 95.00th=[ 6141], 00:26:55.350 | 99.00th=[ 6208], 99.50th=[ 6208], 99.90th=[ 6208], 99.95th=[ 6208], 00:26:55.350 | 99.99th=[ 6208] 00:26:55.350 bw ( KiB/s): min= 2048, max=466944, per=15.44%, avg=284672.00, stdev=173869.05, samples=5 00:26:55.350 iops : min= 2, max= 456, avg=278.00, stdev=169.79, samples=5 00:26:55.351 lat (msec) : 250=0.12%, 500=76.91%, >=2000=22.96% 00:26:55.351 cpu : usr=0.08%, sys=1.58%, ctx=754, majf=0, minf=32769 00:26:55.351 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.9%, >=64=92.3% 00:26:55.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.351 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:55.351 issued rwts: total=823,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.351 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.351 job5: (groupid=0, jobs=1): err= 0: pid=182173: Wed Jul 24 14:25:22 2024 00:26:55.351 read: IOPS=17, BW=17.2MiB/s (18.0MB/s)(175MiB/10194msec) 00:26:55.351 slat (usec): min=412, max=2165.1k, avg=57552.54, stdev=299541.94 
00:26:55.351 clat (msec): min=121, max=6469, avg=4119.95, stdev=1190.03 00:26:55.351 lat (msec): min=670, max=6470, avg=4177.50, stdev=1170.87 00:26:55.351 clat percentiles (msec): 00:26:55.351 | 1.00th=[ 667], 5.00th=[ 2299], 10.00th=[ 3608], 20.00th=[ 3675], 00:26:55.351 | 30.00th=[ 3775], 40.00th=[ 3842], 50.00th=[ 3943], 60.00th=[ 4010], 00:26:55.351 | 70.00th=[ 4144], 80.00th=[ 4396], 90.00th=[ 6342], 95.00th=[ 6409], 00:26:55.351 | 99.00th=[ 6477], 99.50th=[ 6477], 99.90th=[ 6477], 99.95th=[ 6477], 00:26:55.351 | 99.99th=[ 6477] 00:26:55.351 bw ( KiB/s): min= 2048, max=49152, per=1.74%, avg=32085.33, stdev=26093.59, samples=3 00:26:55.351 iops : min= 2, max= 48, avg=31.33, stdev=25.48, samples=3 00:26:55.351 lat (msec) : 250=0.57%, 750=3.43%, 2000=0.57%, >=2000=95.43% 00:26:55.351 cpu : usr=0.00%, sys=0.98%, ctx=297, majf=0, minf=32769 00:26:55.351 IO depths : 1=0.6%, 2=1.1%, 4=2.3%, 8=4.6%, 16=9.1%, 32=18.3%, >=64=64.0% 00:26:55.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.351 complete : 0=0.0%, 4=98.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.0% 00:26:55.351 issued rwts: total=175,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.351 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.351 job5: (groupid=0, jobs=1): err= 0: pid=182174: Wed Jul 24 14:25:22 2024 00:26:55.351 read: IOPS=68, BW=68.1MiB/s (71.4MB/s)(684MiB/10045msec) 00:26:55.351 slat (usec): min=69, max=2009.0k, avg=14626.55, stdev=109910.23 00:26:55.351 clat (msec): min=35, max=3935, avg=1792.48, stdev=1263.33 00:26:55.351 lat (msec): min=44, max=3937, avg=1807.11, stdev=1265.78 00:26:55.351 clat percentiles (msec): 00:26:55.351 | 1.00th=[ 96], 5.00th=[ 388], 10.00th=[ 642], 20.00th=[ 693], 00:26:55.351 | 30.00th=[ 776], 40.00th=[ 894], 50.00th=[ 1053], 60.00th=[ 1687], 00:26:55.351 | 70.00th=[ 3171], 80.00th=[ 3373], 90.00th=[ 3507], 95.00th=[ 3675], 00:26:55.351 | 99.00th=[ 3876], 99.50th=[ 3943], 99.90th=[ 3943], 99.95th=[ 3943], 00:26:55.351 | 99.99th=[ 3943] 00:26:55.351 bw ( KiB/s): min= 4096, max=190464, per=4.75%, avg=87577.23, stdev=49927.80, samples=13 00:26:55.351 iops : min= 4, max= 186, avg=85.46, stdev=48.75, samples=13 00:26:55.351 lat (msec) : 50=0.44%, 100=1.17%, 250=1.90%, 500=3.22%, 750=21.78% 00:26:55.351 lat (msec) : 1000=17.40%, 2000=16.81%, >=2000=37.28% 00:26:55.351 cpu : usr=0.04%, sys=1.17%, ctx=1063, majf=0, minf=32769 00:26:55.351 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.7%, >=64=90.8% 00:26:55.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.351 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:26:55.351 issued rwts: total=684,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.351 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.351 job5: (groupid=0, jobs=1): err= 0: pid=182175: Wed Jul 24 14:25:22 2024 00:26:55.351 read: IOPS=241, BW=242MiB/s (254MB/s)(2459MiB/10171msec) 00:26:55.351 slat (usec): min=55, max=2104.7k, avg=4072.10, stdev=64478.38 00:26:55.351 clat (msec): min=133, max=2543, avg=447.15, stdev=701.53 00:26:55.351 lat (msec): min=134, max=2545, avg=451.22, stdev=704.57 00:26:55.351 clat percentiles (msec): 00:26:55.351 | 1.00th=[ 134], 5.00th=[ 136], 10.00th=[ 136], 20.00th=[ 136], 00:26:55.351 | 30.00th=[ 138], 40.00th=[ 138], 50.00th=[ 140], 60.00th=[ 144], 00:26:55.351 | 70.00th=[ 288], 80.00th=[ 426], 90.00th=[ 2299], 95.00th=[ 2433], 00:26:55.351 | 99.00th=[ 2534], 99.50th=[ 2534], 99.90th=[ 2534], 99.95th=[ 2534], 00:26:55.351 
| 99.99th=[ 2534] 00:26:55.351 bw ( KiB/s): min=94208, max=954368, per=28.76%, avg=530221.67, stdev=359455.83, samples=9 00:26:55.351 iops : min= 92, max= 932, avg=517.78, stdev=351.01, samples=9 00:26:55.351 lat (msec) : 250=67.47%, 500=14.84%, 750=6.63%, 2000=0.12%, >=2000=10.94% 00:26:55.351 cpu : usr=0.09%, sys=2.08%, ctx=2294, majf=0, minf=32769 00:26:55.351 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:26:55.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.351 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:55.351 issued rwts: total=2459,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.351 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.351 job5: (groupid=0, jobs=1): err= 0: pid=182176: Wed Jul 24 14:25:22 2024 00:26:55.351 read: IOPS=46, BW=46.4MiB/s (48.7MB/s)(469MiB/10098msec) 00:26:55.351 slat (usec): min=43, max=2038.8k, avg=21390.74, stdev=157286.46 00:26:55.351 clat (msec): min=61, max=6625, avg=1597.76, stdev=1376.05 00:26:55.351 lat (msec): min=135, max=6639, avg=1619.15, stdev=1403.35 00:26:55.351 clat percentiles (msec): 00:26:55.351 | 1.00th=[ 138], 5.00th=[ 239], 10.00th=[ 426], 20.00th=[ 726], 00:26:55.351 | 30.00th=[ 802], 40.00th=[ 844], 50.00th=[ 894], 60.00th=[ 961], 00:26:55.351 | 70.00th=[ 2333], 80.00th=[ 2467], 90.00th=[ 4111], 95.00th=[ 4212], 00:26:55.351 | 99.00th=[ 6141], 99.50th=[ 6141], 99.90th=[ 6611], 99.95th=[ 6611], 00:26:55.351 | 99.99th=[ 6611] 00:26:55.351 bw ( KiB/s): min=61440, max=157696, per=6.31%, avg=116394.67, stdev=41839.73, samples=6 00:26:55.351 iops : min= 60, max= 154, avg=113.67, stdev=40.86, samples=6 00:26:55.351 lat (msec) : 100=0.21%, 250=6.18%, 500=6.82%, 750=9.81%, 1000=41.58% 00:26:55.351 lat (msec) : 2000=1.07%, >=2000=34.33% 00:26:55.351 cpu : usr=0.00%, sys=0.98%, ctx=385, majf=0, minf=32769 00:26:55.351 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.4%, 32=6.8%, >=64=86.6% 00:26:55.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.351 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:26:55.351 issued rwts: total=469,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.351 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.351 job5: (groupid=0, jobs=1): err= 0: pid=182177: Wed Jul 24 14:25:22 2024 00:26:55.351 read: IOPS=61, BW=61.9MiB/s (64.9MB/s)(754MiB/12179msec) 00:26:55.351 slat (usec): min=444, max=2028.4k, avg=13292.38, stdev=115099.81 00:26:55.351 clat (msec): min=276, max=5384, avg=1447.13, stdev=1619.47 00:26:55.351 lat (msec): min=278, max=5386, avg=1460.42, stdev=1627.99 00:26:55.351 clat percentiles (msec): 00:26:55.351 | 1.00th=[ 279], 5.00th=[ 284], 10.00th=[ 284], 20.00th=[ 284], 00:26:55.351 | 30.00th=[ 292], 40.00th=[ 531], 50.00th=[ 844], 60.00th=[ 1011], 00:26:55.351 | 70.00th=[ 1250], 80.00th=[ 1536], 90.00th=[ 4665], 95.00th=[ 5000], 00:26:55.351 | 99.00th=[ 5336], 99.50th=[ 5336], 99.90th=[ 5403], 99.95th=[ 5403], 00:26:55.351 | 99.99th=[ 5403] 00:26:55.351 bw ( KiB/s): min= 1497, max=458752, per=6.96%, avg=128375.00, stdev=141443.59, samples=10 00:26:55.351 iops : min= 1, max= 448, avg=125.30, stdev=138.18, samples=10 00:26:55.351 lat (msec) : 500=39.39%, 750=7.82%, 1000=11.14%, 2000=22.15%, >=2000=19.50% 00:26:55.351 cpu : usr=0.03%, sys=0.97%, ctx=1437, majf=0, minf=32769 00:26:55.351 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.2%, >=64=91.6% 00:26:55.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.351 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:26:55.351 issued rwts: total=754,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.351 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.351 job5: (groupid=0, jobs=1): err= 0: pid=182179: Wed Jul 24 14:25:22 2024 00:26:55.351 read: IOPS=61, BW=61.9MiB/s (64.9MB/s)(630MiB/10181msec) 00:26:55.351 slat (usec): min=51, max=2034.4k, avg=15956.01, stdev=149855.35 00:26:55.351 clat (msec): min=122, max=6399, avg=1111.45, stdev=1355.15 00:26:55.351 lat (msec): min=278, max=6405, avg=1127.41, stdev=1371.25 00:26:55.351 clat percentiles (msec): 00:26:55.351 | 1.00th=[ 279], 5.00th=[ 284], 10.00th=[ 284], 20.00th=[ 292], 00:26:55.351 | 30.00th=[ 351], 40.00th=[ 426], 50.00th=[ 451], 60.00th=[ 523], 00:26:55.352 | 70.00th=[ 659], 80.00th=[ 2433], 90.00th=[ 2567], 95.00th=[ 2836], 00:26:55.352 | 99.00th=[ 6342], 99.50th=[ 6409], 99.90th=[ 6409], 99.95th=[ 6409], 00:26:55.352 | 99.99th=[ 6409] 00:26:55.352 bw ( KiB/s): min= 2048, max=380928, per=11.15%, avg=205509.60, stdev=161582.13, samples=5 00:26:55.352 iops : min= 2, max= 372, avg=200.60, stdev=157.75, samples=5 00:26:55.352 lat (msec) : 250=0.16%, 500=57.30%, 750=14.76%, 1000=1.43%, >=2000=26.35% 00:26:55.352 cpu : usr=0.07%, sys=1.32%, ctx=699, majf=0, minf=32769 00:26:55.352 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.5%, 32=5.1%, >=64=90.0% 00:26:55.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.352 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:26:55.352 issued rwts: total=630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.352 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.352 job5: (groupid=0, jobs=1): err= 0: pid=182180: Wed Jul 24 14:25:22 2024 00:26:55.352 read: IOPS=2, BW=2721KiB/s (2786kB/s)(27.0MiB/10162msec) 00:26:55.352 slat (usec): min=1878, max=2016.8k, avg=371832.08, stdev=742648.55 00:26:55.352 clat (msec): min=121, max=10005, avg=4948.66, stdev=2586.56 00:26:55.352 lat (msec): min=2136, max=10160, avg=5320.49, stdev=2587.55 00:26:55.352 clat percentiles (msec): 00:26:55.352 | 1.00th=[ 122], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 2232], 00:26:55.352 | 30.00th=[ 4279], 40.00th=[ 4329], 50.00th=[ 4396], 60.00th=[ 6409], 00:26:55.352 | 70.00th=[ 6544], 80.00th=[ 6544], 90.00th=[ 8658], 95.00th=[10000], 00:26:55.352 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000], 00:26:55.352 | 99.99th=[10000] 00:26:55.352 lat (msec) : 250=3.70%, >=2000=96.30% 00:26:55.352 cpu : usr=0.02%, sys=0.12%, ctx=66, majf=0, minf=6913 00:26:55.352 IO depths : 1=3.7%, 2=7.4%, 4=14.8%, 8=29.6%, 16=44.4%, 32=0.0%, >=64=0.0% 00:26:55.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.352 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:26:55.352 issued rwts: total=27,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.352 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.352 job5: (groupid=0, jobs=1): err= 0: pid=182181: Wed Jul 24 14:25:22 2024 00:26:55.352 read: IOPS=133, BW=133MiB/s (140MB/s)(1347MiB/10106msec) 00:26:55.352 slat (usec): min=66, max=2067.6k, avg=7431.19, stdev=57777.73 00:26:55.352 clat (msec): min=86, max=4207, avg=916.81, stdev=777.89 00:26:55.352 lat (msec): min=114, max=4208, avg=924.25, stdev=781.39 00:26:55.352 clat percentiles (msec): 00:26:55.352 | 1.00th=[ 271], 5.00th=[ 292], 10.00th=[ 292], 20.00th=[ 296], 
00:26:55.352 | 30.00th=[ 300], 40.00th=[ 384], 50.00th=[ 659], 60.00th=[ 785], 00:26:55.352 | 70.00th=[ 1200], 80.00th=[ 1485], 90.00th=[ 1854], 95.00th=[ 2802], 00:26:55.352 | 99.00th=[ 2869], 99.50th=[ 2869], 99.90th=[ 4111], 99.95th=[ 4212], 00:26:55.352 | 99.99th=[ 4212] 00:26:55.352 bw ( KiB/s): min=20480, max=440320, per=8.46%, avg=155994.94, stdev=131978.38, samples=16 00:26:55.352 iops : min= 20, max= 430, avg=152.31, stdev=128.86, samples=16 00:26:55.352 lat (msec) : 100=0.07%, 250=0.67%, 500=44.39%, 750=13.73%, 1000=6.01% 00:26:55.352 lat (msec) : 2000=25.69%, >=2000=9.43% 00:26:55.352 cpu : usr=0.06%, sys=1.92%, ctx=2243, majf=0, minf=32769 00:26:55.352 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.3% 00:26:55.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.352 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:55.352 issued rwts: total=1347,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.352 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.352 job5: (groupid=0, jobs=1): err= 0: pid=182182: Wed Jul 24 14:25:22 2024 00:26:55.352 read: IOPS=48, BW=48.9MiB/s (51.3MB/s)(498MiB/10188msec) 00:26:55.352 slat (usec): min=45, max=2010.1k, avg=20196.58, stdev=177884.85 00:26:55.352 clat (msec): min=117, max=8147, avg=1274.61, stdev=2027.72 00:26:55.352 lat (msec): min=118, max=8149, avg=1294.81, stdev=2053.80 00:26:55.352 clat percentiles (msec): 00:26:55.352 | 1.00th=[ 118], 5.00th=[ 118], 10.00th=[ 120], 20.00th=[ 120], 00:26:55.352 | 30.00th=[ 138], 40.00th=[ 190], 50.00th=[ 239], 60.00th=[ 334], 00:26:55.352 | 70.00th=[ 1586], 80.00th=[ 1636], 90.00th=[ 4530], 95.00th=[ 6678], 00:26:55.352 | 99.00th=[ 8154], 99.50th=[ 8154], 99.90th=[ 8154], 99.95th=[ 8154], 00:26:55.352 | 99.99th=[ 8154] 00:26:55.352 bw ( KiB/s): min=262144, max=495616, per=20.55%, avg=378880.00, stdev=165089.63, samples=2 00:26:55.352 iops : min= 256, max= 484, avg=370.00, stdev=161.22, samples=2 00:26:55.352 lat (msec) : 250=54.22%, 500=8.03%, 2000=24.10%, >=2000=13.65% 00:26:55.352 cpu : usr=0.03%, sys=1.12%, ctx=388, majf=0, minf=32769 00:26:55.352 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.4%, >=64=87.3% 00:26:55.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.352 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:26:55.352 issued rwts: total=498,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.352 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:55.352 00:26:55.352 Run status group 0 (all jobs): 00:26:55.352 READ: bw=1801MiB/s (1888MB/s), 250KiB/s-242MiB/s (256kB/s-254MB/s), io=25.2GiB (27.1GB), run=10045-14346msec 00:26:55.352 00:26:55.352 Disk stats (read/write): 00:26:55.352 nvme0n1: ios=16694/0, merge=0/0, ticks=12090490/0, in_queue=12090490, util=98.61% 00:26:55.352 nvme1n1: ios=15873/0, merge=0/0, ticks=9638840/0, in_queue=9638840, util=98.66% 00:26:55.352 nvme2n1: ios=25462/0, merge=0/0, ticks=10766748/0, in_queue=10766748, util=98.83% 00:26:55.352 nvme3n1: ios=45245/0, merge=0/0, ticks=11682353/0, in_queue=11682353, util=98.90% 00:26:55.352 nvme4n1: ios=29633/0, merge=0/0, ticks=10416315/0, in_queue=10416315, util=99.14% 00:26:55.352 nvme5n1: ios=72299/0, merge=0/0, ticks=11557154/0, in_queue=11557154, util=99.27% 00:26:55.635 14:25:22 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:26:55.635 14:25:22 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:26:55.635 
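The xtrace that follows is the teardown phase of the srq_overwhelm test: for each of the six NVMe-oF subsystems the script disconnects the initiator, waits for the block device with the matching serial to disappear, then deletes the subsystem on the target. Reconstructed from the trace tags below (srq_overwhelm.sh@40-@43), the loop is roughly the following sketch; waitforserial_disconnect and rpc_cmd are SPDK helpers from common/autotest_common.sh whose bodies are only partially visible in this trace, so treat it as an approximation, not the verbatim script:

    # Approximate reconstruction of the cleanup loop traced below
    for i in $(seq 0 5); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"             # drop the initiator connection
        waitforserial_disconnect "SPDK0000000000000${i}"               # poll lsblk until that serial is gone
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"  # remove the subsystem on the target
    done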
14:25:22 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:26:55.635 14:25:22 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:26:57.003 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:26:57.003 14:25:24 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:26:57.003 14:25:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1215 -- # local i=0 00:26:57.003 14:25:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:57.003 14:25:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # grep -q -w SPDK00000000000000 00:26:57.003 14:25:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:57.003 14:25:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # grep -q -w SPDK00000000000000 00:26:57.003 14:25:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # return 0 00:26:57.003 14:25:24 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:57.003 14:25:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.003 14:25:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:57.003 14:25:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.003 14:25:24 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:26:57.003 14:25:24 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:57.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:57.932 14:25:25 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:26:57.932 14:25:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1215 -- # local i=0 00:26:57.932 14:25:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # grep -q -w SPDK00000000000001 00:26:57.932 14:25:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:57.932 14:25:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:57.932 14:25:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # grep -q -w SPDK00000000000001 00:26:57.932 14:25:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # return 0 00:26:57.932 14:25:25 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:57.932 14:25:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.932 14:25:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:57.933 14:25:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.933 14:25:25 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:26:57.933 14:25:25 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:59.302 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:59.302 14:25:26 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:26:59.302 14:25:26 
nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1215 -- # local i=0 00:26:59.302 14:25:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:59.302 14:25:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # grep -q -w SPDK00000000000002 00:26:59.302 14:25:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:59.302 14:25:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # grep -q -w SPDK00000000000002 00:26:59.302 14:25:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # return 0 00:26:59.302 14:25:26 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:59.302 14:25:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.302 14:25:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:59.302 14:25:26 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.302 14:25:26 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:26:59.302 14:25:26 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:27:00.233 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:27:00.233 14:25:27 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:27:00.233 14:25:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1215 -- # local i=0 00:27:00.233 14:25:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:00.233 14:25:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # grep -q -w SPDK00000000000003 00:27:00.233 14:25:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:00.233 14:25:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # grep -q -w SPDK00000000000003 00:27:00.233 14:25:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # return 0 00:27:00.233 14:25:27 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:00.233 14:25:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.233 14:25:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:00.233 14:25:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.233 14:25:27 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:27:00.233 14:25:27 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:27:01.166 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:27:01.166 14:25:28 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:27:01.166 14:25:28 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1215 -- # local i=0 00:27:01.166 14:25:28 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:01.166 14:25:28 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # grep -q -w SPDK00000000000004 00:27:01.166 14:25:28 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:01.166 14:25:28 
nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # grep -q -w SPDK00000000000004 00:27:01.166 14:25:28 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # return 0 00:27:01.166 14:25:28 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:27:01.166 14:25:28 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.166 14:25:28 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:01.166 14:25:28 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.166 14:25:28 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:27:01.166 14:25:28 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:27:02.538 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:27:02.538 14:25:29 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:27:02.538 14:25:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1215 -- # local i=0 00:27:02.538 14:25:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:02.538 14:25:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1216 -- # grep -q -w SPDK00000000000005 00:27:02.538 14:25:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:02.538 14:25:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # grep -q -w SPDK00000000000005 00:27:02.538 14:25:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # return 0 00:27:02.538 14:25:29 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:27:02.538 14:25:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.538 14:25:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:02.538 14:25:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.538 14:25:29 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:27:02.538 14:25:29 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:27:02.538 14:25:29 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:02.538 14:25:29 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # sync 00:27:02.538 14:25:29 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:27:02.538 14:25:29 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:27:02.538 14:25:29 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@120 -- # set +e 00:27:02.538 14:25:29 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:02.538 14:25:29 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:27:02.538 rmmod nvme_rdma 00:27:02.538 rmmod nvme_fabrics 00:27:02.538 14:25:29 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:02.538 14:25:29 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set -e 00:27:02.538 14:25:29 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # return 0 00:27:02.538 14:25:29 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@489 -- # '[' -n 180869 ']' 00:27:02.538 14:25:29 nvmf_rdma.nvmf_srq_overwhelm -- 
nvmf/common.sh@490 -- # killprocess 180869 00:27:02.538 14:25:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@946 -- # '[' -z 180869 ']' 00:27:02.538 14:25:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@950 -- # kill -0 180869 00:27:02.538 14:25:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@951 -- # uname 00:27:02.538 14:25:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:02.538 14:25:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 180869 00:27:02.538 14:25:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:02.538 14:25:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:02.538 14:25:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@964 -- # echo 'killing process with pid 180869' 00:27:02.539 killing process with pid 180869 00:27:02.539 14:25:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@965 -- # kill 180869 00:27:02.539 14:25:29 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@970 -- # wait 180869 00:27:02.796 14:25:30 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:02.796 14:25:30 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:27:02.796 00:27:02.796 real 0m33.066s 00:27:02.796 user 2m1.225s 00:27:02.796 sys 0m9.511s 00:27:02.796 14:25:30 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:02.796 14:25:30 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:02.796 ************************************ 00:27:02.796 END TEST nvmf_srq_overwhelm 00:27:02.796 ************************************ 00:27:02.796 14:25:30 nvmf_rdma -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:27:02.796 14:25:30 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:02.796 14:25:30 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:02.796 14:25:30 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:02.796 ************************************ 00:27:02.796 START TEST nvmf_shutdown 00:27:02.796 ************************************ 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:27:02.797 * Looking for test storage... 
00:27:02.797 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:02.797 14:25:30 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:03.055 ************************************ 00:27:03.055 START TEST nvmf_shutdown_tc1 00:27:03.055 ************************************ 00:27:03.055 14:25:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:27:03.055 14:25:30 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:27:03.055 14:25:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:03.055 14:25:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:27:03.055 14:25:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:03.055 14:25:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:03.055 14:25:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:03.055 14:25:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:03.055 14:25:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.055 14:25:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:03.055 14:25:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.055 14:25:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:03.055 14:25:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:03.055 14:25:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:03.055 14:25:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:05.584 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:05.584 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:05.584 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:05.584 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:05.584 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:05.584 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:05.584 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:05.584 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:27:05.584 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:05.584 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:27:05.584 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:27:05.584 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:27:05.584 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:27:05.584 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:27:05.584 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:05.584 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:05.584 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:05.584 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:05.584 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:05.584 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:05.584 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:05.584 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:05.584 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:05.584 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:05.584 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:05.584 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:05.584 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:05.584 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:27:05.584 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:27:05.584 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:27:05.585 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:27:05.585 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # 
[[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:27:05.585 Found net devices under 0000:81:00.0: mlx_0_0 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:27:05.585 Found net devices under 0000:81:00.1: mlx_0_1 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # rdma_device_init 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # uname 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:27:05.585 14:25:32 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@63 -- # modprobe ib_core 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- 
# awk '{print $4}' 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:27:05.585 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:05.585 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:27:05.585 altname enp129s0f0np0 00:27:05.585 inet 192.168.100.8/24 scope global mlx_0_0 00:27:05.585 valid_lft forever preferred_lft forever 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:27:05.585 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:05.585 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:27:05.585 altname enp129s0f1np1 00:27:05.585 inet 192.168.100.9/24 scope global mlx_0_1 00:27:05.585 valid_lft forever preferred_lft forever 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:27:05.585 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:27:05.586 192.168.100.9' 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:27:05.586 192.168.100.9' 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # head -n 1 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:27:05.586 192.168.100.9' 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # tail -n +2 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # head -n 1 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:05.586 14:25:32 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=186844 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 186844 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 186844 ']' 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:05.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:05.586 14:25:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:05.586 [2024-07-24 14:25:32.747504] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:27:05.586 [2024-07-24 14:25:32.747591] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:05.586 EAL: No free 2048 kB hugepages reported on node 1 00:27:05.586 [2024-07-24 14:25:32.816542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:05.586 [2024-07-24 14:25:32.906780] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:05.586 [2024-07-24 14:25:32.906873] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:05.586 [2024-07-24 14:25:32.906887] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:05.586 [2024-07-24 14:25:32.906898] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:05.586 [2024-07-24 14:25:32.906908] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:05.586 [2024-07-24 14:25:32.907011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:05.586 [2024-07-24 14:25:32.907074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:05.586 [2024-07-24 14:25:32.907116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:05.586 [2024-07-24 14:25:32.907118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:05.844 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:05.844 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:27:05.844 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:05.844 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:05.844 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:05.844 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:05.844 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:05.844 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.844 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:05.844 [2024-07-24 14:25:33.098001] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1aaccd0/0x1ab11c0) succeed. 00:27:05.844 [2024-07-24 14:25:33.108955] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1aae2c0/0x1af2850) succeed. 
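At this point in the trace the target is up (reactors running on cores 1-4, both mlx5 IB devices created) and the RDMA transport has been created via nvmf_create_transport. As a rough sketch of what each of the ten per-subsystem setups that follow amounts to, the same bring-up can be driven by hand with SPDK's stock scripts/rpc.py; the flags and names below (Malloc1, cnode1, the SPDK000... serial pattern, 192.168.100.8:4420, and the MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 values set earlier) simply mirror this run and are not the verbatim shutdown.sh code:

# Illustrative only: manual equivalent of one of the ten subsystem setups.
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
scripts/rpc.py bdev_malloc_create -b Malloc1 64 512
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420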
00:27:06.102 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.102 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:06.102 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:06.102 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:06.102 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:06.102 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:06.102 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:06.102 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:06.102 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:06.102 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:06.102 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:06.102 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:06.102 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:06.102 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:06.103 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:06.103 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:06.103 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:06.103 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:06.103 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:06.103 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:06.103 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:06.103 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:06.103 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:06.103 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:06.103 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:06.103 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:06.103 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:06.103 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.103 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:06.103 Malloc1 00:27:06.103 [2024-07-24 14:25:33.324681] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:06.103 Malloc2 00:27:06.103 Malloc3 00:27:06.103 Malloc4 
00:27:06.359 Malloc5 00:27:06.359 Malloc6 00:27:06.359 Malloc7 00:27:06.359 Malloc8 00:27:06.359 Malloc9 00:27:06.616 Malloc10 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=187024 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 187024 /var/tmp/bdevperf.sock 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 187024 ']' 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:06.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
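The launch traced below feeds the generated target JSON to bdev_svc through process substitution (the --json /dev/fd/63 argument), so the config never touches disk and bdevperf can later be handed a fresh copy of the same stream. A minimal sketch of that pattern, assuming this workspace's layout and the gen_nvmf_target_json helper from nvmf/common.sh:

# Sketch: generate the attach-controller config in-process and pass it as a
# file descriptor; /dev/fd/63 in the trace is exactly this substitution.
./test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10)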
00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:06.617 { 00:27:06.617 "params": { 00:27:06.617 "name": "Nvme$subsystem", 00:27:06.617 "trtype": "$TEST_TRANSPORT", 00:27:06.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.617 "adrfam": "ipv4", 00:27:06.617 "trsvcid": "$NVMF_PORT", 00:27:06.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.617 "hdgst": ${hdgst:-false}, 00:27:06.617 "ddgst": ${ddgst:-false} 00:27:06.617 }, 00:27:06.617 "method": "bdev_nvme_attach_controller" 00:27:06.617 } 00:27:06.617 EOF 00:27:06.617 )") 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:06.617 { 00:27:06.617 "params": { 00:27:06.617 "name": "Nvme$subsystem", 00:27:06.617 "trtype": "$TEST_TRANSPORT", 00:27:06.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.617 "adrfam": "ipv4", 00:27:06.617 "trsvcid": "$NVMF_PORT", 00:27:06.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.617 "hdgst": ${hdgst:-false}, 00:27:06.617 "ddgst": ${ddgst:-false} 00:27:06.617 }, 00:27:06.617 "method": "bdev_nvme_attach_controller" 00:27:06.617 } 00:27:06.617 EOF 00:27:06.617 )") 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:06.617 { 00:27:06.617 "params": { 00:27:06.617 "name": "Nvme$subsystem", 00:27:06.617 "trtype": "$TEST_TRANSPORT", 00:27:06.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.617 "adrfam": "ipv4", 00:27:06.617 "trsvcid": "$NVMF_PORT", 00:27:06.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.617 "hdgst": ${hdgst:-false}, 00:27:06.617 "ddgst": ${ddgst:-false} 00:27:06.617 }, 00:27:06.617 "method": "bdev_nvme_attach_controller" 00:27:06.617 } 00:27:06.617 EOF 00:27:06.617 )") 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:06.617 { 00:27:06.617 "params": { 00:27:06.617 "name": "Nvme$subsystem", 00:27:06.617 "trtype": "$TEST_TRANSPORT", 00:27:06.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.617 "adrfam": "ipv4", 00:27:06.617 "trsvcid": "$NVMF_PORT", 00:27:06.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.617 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:27:06.617 "hdgst": ${hdgst:-false}, 00:27:06.617 "ddgst": ${ddgst:-false} 00:27:06.617 }, 00:27:06.617 "method": "bdev_nvme_attach_controller" 00:27:06.617 } 00:27:06.617 EOF 00:27:06.617 )") 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:06.617 { 00:27:06.617 "params": { 00:27:06.617 "name": "Nvme$subsystem", 00:27:06.617 "trtype": "$TEST_TRANSPORT", 00:27:06.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.617 "adrfam": "ipv4", 00:27:06.617 "trsvcid": "$NVMF_PORT", 00:27:06.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.617 "hdgst": ${hdgst:-false}, 00:27:06.617 "ddgst": ${ddgst:-false} 00:27:06.617 }, 00:27:06.617 "method": "bdev_nvme_attach_controller" 00:27:06.617 } 00:27:06.617 EOF 00:27:06.617 )") 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:06.617 { 00:27:06.617 "params": { 00:27:06.617 "name": "Nvme$subsystem", 00:27:06.617 "trtype": "$TEST_TRANSPORT", 00:27:06.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.617 "adrfam": "ipv4", 00:27:06.617 "trsvcid": "$NVMF_PORT", 00:27:06.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.617 "hdgst": ${hdgst:-false}, 00:27:06.617 "ddgst": ${ddgst:-false} 00:27:06.617 }, 00:27:06.617 "method": "bdev_nvme_attach_controller" 00:27:06.617 } 00:27:06.617 EOF 00:27:06.617 )") 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:06.617 { 00:27:06.617 "params": { 00:27:06.617 "name": "Nvme$subsystem", 00:27:06.617 "trtype": "$TEST_TRANSPORT", 00:27:06.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.617 "adrfam": "ipv4", 00:27:06.617 "trsvcid": "$NVMF_PORT", 00:27:06.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.617 "hdgst": ${hdgst:-false}, 00:27:06.617 "ddgst": ${ddgst:-false} 00:27:06.617 }, 00:27:06.617 "method": "bdev_nvme_attach_controller" 00:27:06.617 } 00:27:06.617 EOF 00:27:06.617 )") 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:06.617 { 00:27:06.617 "params": { 00:27:06.617 "name": "Nvme$subsystem", 00:27:06.617 "trtype": "$TEST_TRANSPORT", 00:27:06.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.617 "adrfam": "ipv4", 00:27:06.617 "trsvcid": "$NVMF_PORT", 00:27:06.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.617 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:27:06.617 "hdgst": ${hdgst:-false}, 00:27:06.617 "ddgst": ${ddgst:-false} 00:27:06.617 }, 00:27:06.617 "method": "bdev_nvme_attach_controller" 00:27:06.617 } 00:27:06.617 EOF 00:27:06.617 )") 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:06.617 { 00:27:06.617 "params": { 00:27:06.617 "name": "Nvme$subsystem", 00:27:06.617 "trtype": "$TEST_TRANSPORT", 00:27:06.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.617 "adrfam": "ipv4", 00:27:06.617 "trsvcid": "$NVMF_PORT", 00:27:06.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.617 "hdgst": ${hdgst:-false}, 00:27:06.617 "ddgst": ${ddgst:-false} 00:27:06.617 }, 00:27:06.617 "method": "bdev_nvme_attach_controller" 00:27:06.617 } 00:27:06.617 EOF 00:27:06.617 )") 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:06.617 { 00:27:06.617 "params": { 00:27:06.617 "name": "Nvme$subsystem", 00:27:06.617 "trtype": "$TEST_TRANSPORT", 00:27:06.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.617 "adrfam": "ipv4", 00:27:06.617 "trsvcid": "$NVMF_PORT", 00:27:06.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.617 "hdgst": ${hdgst:-false}, 00:27:06.617 "ddgst": ${ddgst:-false} 00:27:06.617 }, 00:27:06.617 "method": "bdev_nvme_attach_controller" 00:27:06.617 } 00:27:06.617 EOF 00:27:06.617 )") 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:06.617 14:25:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:06.617 "params": { 00:27:06.617 "name": "Nvme1", 00:27:06.617 "trtype": "rdma", 00:27:06.617 "traddr": "192.168.100.8", 00:27:06.617 "adrfam": "ipv4", 00:27:06.617 "trsvcid": "4420", 00:27:06.617 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:06.617 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:06.617 "hdgst": false, 00:27:06.617 "ddgst": false 00:27:06.617 }, 00:27:06.617 "method": "bdev_nvme_attach_controller" 00:27:06.617 },{ 00:27:06.617 "params": { 00:27:06.617 "name": "Nvme2", 00:27:06.617 "trtype": "rdma", 00:27:06.617 "traddr": "192.168.100.8", 00:27:06.617 "adrfam": "ipv4", 00:27:06.617 "trsvcid": "4420", 00:27:06.617 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:06.617 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:06.617 "hdgst": false, 00:27:06.617 "ddgst": false 00:27:06.617 }, 00:27:06.617 "method": "bdev_nvme_attach_controller" 00:27:06.617 },{ 00:27:06.617 "params": { 00:27:06.617 "name": "Nvme3", 00:27:06.617 "trtype": "rdma", 00:27:06.617 "traddr": "192.168.100.8", 00:27:06.617 "adrfam": "ipv4", 00:27:06.617 "trsvcid": "4420", 00:27:06.617 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:06.617 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:06.617 "hdgst": false, 00:27:06.617 "ddgst": false 00:27:06.617 }, 00:27:06.617 "method": "bdev_nvme_attach_controller" 00:27:06.617 },{ 00:27:06.617 "params": { 00:27:06.617 "name": "Nvme4", 00:27:06.617 "trtype": "rdma", 00:27:06.617 "traddr": "192.168.100.8", 00:27:06.617 "adrfam": "ipv4", 00:27:06.617 "trsvcid": "4420", 00:27:06.617 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:06.617 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:06.617 "hdgst": false, 00:27:06.617 "ddgst": false 00:27:06.617 }, 00:27:06.617 "method": "bdev_nvme_attach_controller" 00:27:06.617 },{ 00:27:06.617 "params": { 00:27:06.617 "name": "Nvme5", 00:27:06.617 "trtype": "rdma", 00:27:06.617 "traddr": "192.168.100.8", 00:27:06.617 "adrfam": "ipv4", 00:27:06.617 "trsvcid": "4420", 00:27:06.618 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:06.618 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:06.618 "hdgst": false, 00:27:06.618 "ddgst": false 00:27:06.618 }, 00:27:06.618 "method": "bdev_nvme_attach_controller" 00:27:06.618 },{ 00:27:06.618 "params": { 00:27:06.618 "name": "Nvme6", 00:27:06.618 "trtype": "rdma", 00:27:06.618 "traddr": "192.168.100.8", 00:27:06.618 "adrfam": "ipv4", 00:27:06.618 "trsvcid": "4420", 00:27:06.618 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:06.618 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:06.618 "hdgst": false, 00:27:06.618 "ddgst": false 00:27:06.618 }, 00:27:06.618 "method": "bdev_nvme_attach_controller" 00:27:06.618 },{ 00:27:06.618 "params": { 00:27:06.618 "name": "Nvme7", 00:27:06.618 "trtype": "rdma", 00:27:06.618 "traddr": "192.168.100.8", 00:27:06.618 "adrfam": "ipv4", 00:27:06.618 "trsvcid": "4420", 00:27:06.618 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:06.618 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:06.618 "hdgst": false, 00:27:06.618 "ddgst": false 00:27:06.618 }, 00:27:06.618 "method": "bdev_nvme_attach_controller" 00:27:06.618 },{ 00:27:06.618 "params": { 00:27:06.618 "name": "Nvme8", 00:27:06.618 "trtype": "rdma", 00:27:06.618 "traddr": "192.168.100.8", 00:27:06.618 "adrfam": "ipv4", 00:27:06.618 "trsvcid": "4420", 00:27:06.618 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:06.618 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:27:06.618 "hdgst": false, 00:27:06.618 "ddgst": false 00:27:06.618 }, 00:27:06.618 "method": "bdev_nvme_attach_controller" 00:27:06.618 },{ 00:27:06.618 "params": { 00:27:06.618 "name": "Nvme9", 00:27:06.618 "trtype": "rdma", 00:27:06.618 "traddr": "192.168.100.8", 00:27:06.618 "adrfam": "ipv4", 00:27:06.618 "trsvcid": "4420", 00:27:06.618 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:06.618 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:06.618 "hdgst": false, 00:27:06.618 "ddgst": false 00:27:06.618 }, 00:27:06.618 "method": "bdev_nvme_attach_controller" 00:27:06.618 },{ 00:27:06.618 "params": { 00:27:06.618 "name": "Nvme10", 00:27:06.618 "trtype": "rdma", 00:27:06.618 "traddr": "192.168.100.8", 00:27:06.618 "adrfam": "ipv4", 00:27:06.618 "trsvcid": "4420", 00:27:06.618 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:06.618 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:06.618 "hdgst": false, 00:27:06.618 "ddgst": false 00:27:06.618 }, 00:27:06.618 "method": "bdev_nvme_attach_controller" 00:27:06.618 }' 00:27:06.618 [2024-07-24 14:25:33.819228] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:27:06.618 [2024-07-24 14:25:33.819325] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:06.618 EAL: No free 2048 kB hugepages reported on node 1 00:27:06.618 [2024-07-24 14:25:33.892374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.618 [2024-07-24 14:25:33.978390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:07.550 14:25:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:07.550 14:25:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:27:07.550 14:25:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:07.550 14:25:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.550 14:25:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:07.550 14:25:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.550 14:25:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 187024 00:27:07.550 14:25:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:07.550 14:25:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:27:08.922 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 187024 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:08.922 14:25:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 186844 00:27:08.922 14:25:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:08.922 14:25:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:08.922 14:25:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 
00:27:08.922 14:25:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:08.922 14:25:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:08.922 14:25:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:08.922 { 00:27:08.922 "params": { 00:27:08.922 "name": "Nvme$subsystem", 00:27:08.922 "trtype": "$TEST_TRANSPORT", 00:27:08.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.922 "adrfam": "ipv4", 00:27:08.922 "trsvcid": "$NVMF_PORT", 00:27:08.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.922 "hdgst": ${hdgst:-false}, 00:27:08.922 "ddgst": ${ddgst:-false} 00:27:08.922 }, 00:27:08.922 "method": "bdev_nvme_attach_controller" 00:27:08.922 } 00:27:08.922 EOF 00:27:08.922 )") 00:27:08.922 14:25:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:08.922 14:25:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:08.922 14:25:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:08.922 { 00:27:08.922 "params": { 00:27:08.922 "name": "Nvme$subsystem", 00:27:08.922 "trtype": "$TEST_TRANSPORT", 00:27:08.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.922 "adrfam": "ipv4", 00:27:08.922 "trsvcid": "$NVMF_PORT", 00:27:08.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.922 "hdgst": ${hdgst:-false}, 00:27:08.922 "ddgst": ${ddgst:-false} 00:27:08.922 }, 00:27:08.922 "method": "bdev_nvme_attach_controller" 00:27:08.922 } 00:27:08.922 EOF 00:27:08.922 )") 00:27:08.922 14:25:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:08.922 14:25:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:08.922 14:25:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:08.922 { 00:27:08.922 "params": { 00:27:08.922 "name": "Nvme$subsystem", 00:27:08.922 "trtype": "$TEST_TRANSPORT", 00:27:08.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.922 "adrfam": "ipv4", 00:27:08.922 "trsvcid": "$NVMF_PORT", 00:27:08.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.922 "hdgst": ${hdgst:-false}, 00:27:08.922 "ddgst": ${ddgst:-false} 00:27:08.922 }, 00:27:08.922 "method": "bdev_nvme_attach_controller" 00:27:08.922 } 00:27:08.922 EOF 00:27:08.922 )") 00:27:08.922 14:25:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:08.922 14:25:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:08.922 14:25:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:08.922 { 00:27:08.922 "params": { 00:27:08.922 "name": "Nvme$subsystem", 00:27:08.922 "trtype": "$TEST_TRANSPORT", 00:27:08.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.922 "adrfam": "ipv4", 00:27:08.922 "trsvcid": "$NVMF_PORT", 00:27:08.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.922 "hdgst": ${hdgst:-false}, 00:27:08.922 "ddgst": ${ddgst:-false} 00:27:08.922 }, 00:27:08.922 "method": "bdev_nvme_attach_controller" 00:27:08.922 } 00:27:08.922 EOF 00:27:08.922 )") 
00:27:08.922 14:25:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:08.922 14:25:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:08.922 14:25:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:08.922 { 00:27:08.922 "params": { 00:27:08.922 "name": "Nvme$subsystem", 00:27:08.922 "trtype": "$TEST_TRANSPORT", 00:27:08.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.922 "adrfam": "ipv4", 00:27:08.922 "trsvcid": "$NVMF_PORT", 00:27:08.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.922 "hdgst": ${hdgst:-false}, 00:27:08.922 "ddgst": ${ddgst:-false} 00:27:08.922 }, 00:27:08.922 "method": "bdev_nvme_attach_controller" 00:27:08.922 } 00:27:08.922 EOF 00:27:08.922 )") 00:27:08.922 14:25:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:08.922 14:25:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:08.922 14:25:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:08.922 { 00:27:08.922 "params": { 00:27:08.922 "name": "Nvme$subsystem", 00:27:08.922 "trtype": "$TEST_TRANSPORT", 00:27:08.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.922 "adrfam": "ipv4", 00:27:08.922 "trsvcid": "$NVMF_PORT", 00:27:08.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.922 "hdgst": ${hdgst:-false}, 00:27:08.922 "ddgst": ${ddgst:-false} 00:27:08.922 }, 00:27:08.922 "method": "bdev_nvme_attach_controller" 00:27:08.922 } 00:27:08.922 EOF 00:27:08.922 )") 00:27:08.923 14:25:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:08.923 14:25:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:08.923 14:25:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:08.923 { 00:27:08.923 "params": { 00:27:08.923 "name": "Nvme$subsystem", 00:27:08.923 "trtype": "$TEST_TRANSPORT", 00:27:08.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.923 "adrfam": "ipv4", 00:27:08.923 "trsvcid": "$NVMF_PORT", 00:27:08.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.923 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.923 "hdgst": ${hdgst:-false}, 00:27:08.923 "ddgst": ${ddgst:-false} 00:27:08.923 }, 00:27:08.923 "method": "bdev_nvme_attach_controller" 00:27:08.923 } 00:27:08.923 EOF 00:27:08.923 )") 00:27:08.923 14:25:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:08.923 14:25:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:08.923 14:25:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:08.923 { 00:27:08.923 "params": { 00:27:08.923 "name": "Nvme$subsystem", 00:27:08.923 "trtype": "$TEST_TRANSPORT", 00:27:08.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.923 "adrfam": "ipv4", 00:27:08.923 "trsvcid": "$NVMF_PORT", 00:27:08.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.923 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.923 "hdgst": ${hdgst:-false}, 00:27:08.923 "ddgst": ${ddgst:-false} 00:27:08.923 }, 00:27:08.923 "method": "bdev_nvme_attach_controller" 00:27:08.923 } 00:27:08.923 EOF 00:27:08.923 )") 00:27:08.923 
14:25:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:08.923 14:25:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:08.923 14:25:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:08.923 { 00:27:08.923 "params": { 00:27:08.923 "name": "Nvme$subsystem", 00:27:08.923 "trtype": "$TEST_TRANSPORT", 00:27:08.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.923 "adrfam": "ipv4", 00:27:08.923 "trsvcid": "$NVMF_PORT", 00:27:08.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.923 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.923 "hdgst": ${hdgst:-false}, 00:27:08.923 "ddgst": ${ddgst:-false} 00:27:08.923 }, 00:27:08.923 "method": "bdev_nvme_attach_controller" 00:27:08.923 } 00:27:08.923 EOF 00:27:08.923 )") 00:27:08.923 14:25:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:08.923 14:25:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:08.923 14:25:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:08.923 { 00:27:08.923 "params": { 00:27:08.923 "name": "Nvme$subsystem", 00:27:08.923 "trtype": "$TEST_TRANSPORT", 00:27:08.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.923 "adrfam": "ipv4", 00:27:08.923 "trsvcid": "$NVMF_PORT", 00:27:08.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.923 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.923 "hdgst": ${hdgst:-false}, 00:27:08.923 "ddgst": ${ddgst:-false} 00:27:08.923 }, 00:27:08.923 "method": "bdev_nvme_attach_controller" 00:27:08.923 } 00:27:08.923 EOF 00:27:08.923 )") 00:27:08.923 14:25:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:08.923 14:25:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
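The trace above (nvmf/common.sh@532 through @556) is the whole of the config generator: an array of here-doc fragments, one per subsystem, later comma-joined and pretty-printed. A condensed reconstruction, assuming TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT are exported by the test environment (rdma / 192.168.100.8 / 4420 in this run):

# Condensed reconstruction of the generator whose xtrace appears above.
gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    # Join the fragments with commas; the real helper embeds the result in a
    # larger {"subsystems": [...]} document and validates it with jq (@556).
    local IFS=,
    printf '%s\n' "${config[*]}"
}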
00:27:08.923 14:25:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:08.923 14:25:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:08.923 "params": { 00:27:08.923 "name": "Nvme1", 00:27:08.923 "trtype": "rdma", 00:27:08.923 "traddr": "192.168.100.8", 00:27:08.923 "adrfam": "ipv4", 00:27:08.923 "trsvcid": "4420", 00:27:08.923 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:08.923 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:08.923 "hdgst": false, 00:27:08.923 "ddgst": false 00:27:08.923 }, 00:27:08.923 "method": "bdev_nvme_attach_controller" 00:27:08.923 },{ 00:27:08.923 "params": { 00:27:08.923 "name": "Nvme2", 00:27:08.923 "trtype": "rdma", 00:27:08.923 "traddr": "192.168.100.8", 00:27:08.923 "adrfam": "ipv4", 00:27:08.923 "trsvcid": "4420", 00:27:08.923 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:08.923 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:08.923 "hdgst": false, 00:27:08.923 "ddgst": false 00:27:08.923 }, 00:27:08.923 "method": "bdev_nvme_attach_controller" 00:27:08.923 },{ 00:27:08.923 "params": { 00:27:08.923 "name": "Nvme3", 00:27:08.923 "trtype": "rdma", 00:27:08.923 "traddr": "192.168.100.8", 00:27:08.923 "adrfam": "ipv4", 00:27:08.923 "trsvcid": "4420", 00:27:08.923 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:08.923 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:08.923 "hdgst": false, 00:27:08.923 "ddgst": false 00:27:08.923 }, 00:27:08.923 "method": "bdev_nvme_attach_controller" 00:27:08.923 },{ 00:27:08.923 "params": { 00:27:08.923 "name": "Nvme4", 00:27:08.923 "trtype": "rdma", 00:27:08.923 "traddr": "192.168.100.8", 00:27:08.923 "adrfam": "ipv4", 00:27:08.923 "trsvcid": "4420", 00:27:08.923 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:08.923 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:08.923 "hdgst": false, 00:27:08.923 "ddgst": false 00:27:08.923 }, 00:27:08.923 "method": "bdev_nvme_attach_controller" 00:27:08.923 },{ 00:27:08.923 "params": { 00:27:08.923 "name": "Nvme5", 00:27:08.923 "trtype": "rdma", 00:27:08.923 "traddr": "192.168.100.8", 00:27:08.923 "adrfam": "ipv4", 00:27:08.923 "trsvcid": "4420", 00:27:08.923 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:08.923 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:08.923 "hdgst": false, 00:27:08.923 "ddgst": false 00:27:08.923 }, 00:27:08.923 "method": "bdev_nvme_attach_controller" 00:27:08.923 },{ 00:27:08.923 "params": { 00:27:08.923 "name": "Nvme6", 00:27:08.923 "trtype": "rdma", 00:27:08.923 "traddr": "192.168.100.8", 00:27:08.923 "adrfam": "ipv4", 00:27:08.923 "trsvcid": "4420", 00:27:08.923 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:08.923 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:08.923 "hdgst": false, 00:27:08.923 "ddgst": false 00:27:08.923 }, 00:27:08.923 "method": "bdev_nvme_attach_controller" 00:27:08.923 },{ 00:27:08.923 "params": { 00:27:08.923 "name": "Nvme7", 00:27:08.923 "trtype": "rdma", 00:27:08.923 "traddr": "192.168.100.8", 00:27:08.923 "adrfam": "ipv4", 00:27:08.923 "trsvcid": "4420", 00:27:08.923 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:08.923 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:08.923 "hdgst": false, 00:27:08.923 "ddgst": false 00:27:08.923 }, 00:27:08.923 "method": "bdev_nvme_attach_controller" 00:27:08.923 },{ 00:27:08.923 "params": { 00:27:08.923 "name": "Nvme8", 00:27:08.923 "trtype": "rdma", 00:27:08.923 "traddr": "192.168.100.8", 00:27:08.923 "adrfam": "ipv4", 00:27:08.923 "trsvcid": "4420", 00:27:08.923 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:08.923 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:27:08.923 "hdgst": false, 00:27:08.923 "ddgst": false 00:27:08.923 }, 00:27:08.923 "method": "bdev_nvme_attach_controller" 00:27:08.923 },{ 00:27:08.923 "params": { 00:27:08.923 "name": "Nvme9", 00:27:08.923 "trtype": "rdma", 00:27:08.923 "traddr": "192.168.100.8", 00:27:08.923 "adrfam": "ipv4", 00:27:08.923 "trsvcid": "4420", 00:27:08.923 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:08.923 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:08.923 "hdgst": false, 00:27:08.923 "ddgst": false 00:27:08.923 }, 00:27:08.923 "method": "bdev_nvme_attach_controller" 00:27:08.923 },{ 00:27:08.923 "params": { 00:27:08.923 "name": "Nvme10", 00:27:08.923 "trtype": "rdma", 00:27:08.923 "traddr": "192.168.100.8", 00:27:08.923 "adrfam": "ipv4", 00:27:08.923 "trsvcid": "4420", 00:27:08.923 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:08.923 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:08.923 "hdgst": false, 00:27:08.923 "ddgst": false 00:27:08.923 }, 00:27:08.923 "method": "bdev_nvme_attach_controller" 00:27:08.923 }' 00:27:08.923 [2024-07-24 14:25:35.904319] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:27:08.923 [2024-07-24 14:25:35.904410] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid187211 ] 00:27:08.923 EAL: No free 2048 kB hugepages reported on node 1 00:27:08.923 [2024-07-24 14:25:35.980604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.923 [2024-07-24 14:25:36.070574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.855 Running I/O for 1 seconds... 00:27:11.263 00:27:11.263 Latency(us) 00:27:11.264 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:11.264 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:11.264 Verification LBA range: start 0x0 length 0x400 00:27:11.264 Nvme1n1 : 1.20 307.80 19.24 0.00 0.00 203215.95 9272.13 257872.02 00:27:11.264 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:11.264 Verification LBA range: start 0x0 length 0x400 00:27:11.264 Nvme2n1 : 1.20 320.53 20.03 0.00 0.00 193861.40 17573.36 188743.68 00:27:11.264 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:11.264 Verification LBA range: start 0x0 length 0x400 00:27:11.264 Nvme3n1 : 1.20 320.06 20.00 0.00 0.00 190079.68 16699.54 180976.45 00:27:11.264 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:11.264 Verification LBA range: start 0x0 length 0x400 00:27:11.264 Nvme4n1 : 1.20 325.37 20.34 0.00 0.00 184792.77 5145.79 169325.61 00:27:11.264 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:11.264 Verification LBA range: start 0x0 length 0x400 00:27:11.264 Nvme5n1 : 1.20 318.99 19.94 0.00 0.00 185713.65 25049.32 157674.76 00:27:11.264 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:11.264 Verification LBA range: start 0x0 length 0x400 00:27:11.264 Nvme6n1 : 1.21 318.49 19.91 0.00 0.00 182841.39 25631.86 146800.64 00:27:11.264 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:11.264 Verification LBA range: start 0x0 length 0x400 00:27:11.264 Nvme7n1 : 1.21 318.06 19.88 0.00 0.00 179320.10 26020.22 139033.41 00:27:11.264 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 
65536) 00:27:11.264 Verification LBA range: start 0x0 length 0x400 00:27:11.264 Nvme8n1 : 1.21 333.50 20.84 0.00 0.00 169874.92 4587.52 128159.29 00:27:11.264 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:11.264 Verification LBA range: start 0x0 length 0x400 00:27:11.264 Nvme9n1 : 1.22 329.06 20.57 0.00 0.00 169328.46 4320.52 119615.34 00:27:11.264 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:11.264 Verification LBA range: start 0x0 length 0x400 00:27:11.264 Nvme10n1 : 1.21 263.84 16.49 0.00 0.00 208032.81 17476.27 278066.82 00:27:11.264 =================================================================================================================== 00:27:11.264 Total : 3155.71 197.23 0.00 0.00 186108.33 4320.52 278066.82 00:27:11.264 14:25:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:27:11.264 14:25:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:11.264 14:25:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:11.264 14:25:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:11.264 14:25:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:11.264 14:25:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:11.264 14:25:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:27:11.264 14:25:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:27:11.264 14:25:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:27:11.264 14:25:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:27:11.264 14:25:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:11.264 14:25:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:27:11.264 rmmod nvme_rdma 00:27:11.264 rmmod nvme_fabrics 00:27:11.264 14:25:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:11.264 14:25:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:27:11.264 14:25:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:27:11.264 14:25:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 186844 ']' 00:27:11.264 14:25:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 186844 00:27:11.264 14:25:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 186844 ']' 00:27:11.264 14:25:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 186844 00:27:11.264 14:25:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:27:11.264 14:25:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:11.264 14:25:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 186844 00:27:11.522 14:25:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # 
process_name=reactor_1 00:27:11.522 14:25:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:11.522 14:25:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 186844' 00:27:11.522 killing process with pid 186844 00:27:11.522 14:25:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 186844 00:27:11.522 14:25:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 186844 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:27:12.089 00:27:12.089 real 0m9.000s 00:27:12.089 user 0m28.618s 00:27:12.089 sys 0m2.772s 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:12.089 ************************************ 00:27:12.089 END TEST nvmf_shutdown_tc1 00:27:12.089 ************************************ 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:12.089 ************************************ 00:27:12.089 START TEST nvmf_shutdown_tc2 00:27:12.089 ************************************ 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:12.089 14:25:39 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:27:12.089 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:27:12.090 14:25:39 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:27:12.090 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:27:12.090 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:27:12.090 Found net devices under 0000:81:00.0: mlx_0_0 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:27:12.090 Found net devices under 0000:81:00.1: mlx_0_1 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # rdma_device_init 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # uname 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@63 -- # modprobe ib_core 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:12.090 14:25:39 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:27:12.090 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:12.090 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:27:12.090 altname enp129s0f0np0 00:27:12.090 inet 192.168.100.8/24 scope global mlx_0_0 00:27:12.090 valid_lft forever preferred_lft forever 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:27:12.090 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:12.090 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:27:12.090 altname enp129s0f1np1 00:27:12.090 inet 192.168.100.9/24 scope global mlx_0_1 00:27:12.090 valid_lft forever preferred_lft forever 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:12.090 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:12.091 14:25:39 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:27:12.091 192.168.100.9' 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:27:12.091 192.168.100.9' 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # head -n 1 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:27:12.091 192.168.100.9' 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # tail -n +2 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # head -n 1 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=187819 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 187819 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 187819 ']' 00:27:12.091 
14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:12.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:12.091 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:12.091 [2024-07-24 14:25:39.399773] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:27:12.091 [2024-07-24 14:25:39.399849] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:12.091 EAL: No free 2048 kB hugepages reported on node 1 00:27:12.349 [2024-07-24 14:25:39.469825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:12.349 [2024-07-24 14:25:39.553935] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:12.349 [2024-07-24 14:25:39.553998] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:12.349 [2024-07-24 14:25:39.554019] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:12.349 [2024-07-24 14:25:39.554030] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:12.349 [2024-07-24 14:25:39.554039] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
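The -m 0x1E mask passed to nvmf_tgt (visible as -c 0x1E in the EAL parameters above) explains the core layout in the lines that follow: the mask is read bit by bit, and 0x1E is binary 11110, so four reactors land on cores 1 through 4 while core 0 stays free for the test scripts. A one-liner to decode any such mask:

# Decode an SPDK/DPDK core mask: bit i set means one reactor runs on core i.
# 0x1E = 0b11110, so cores 1 2 3 4 are claimed and core 0 is left free.
mask=0x1E
for i in $(seq 0 31); do
    (( (mask >> i) & 1 )) && echo "reactor on core $i"
done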
00:27:12.349 [2024-07-24 14:25:39.554173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:12.349 [2024-07-24 14:25:39.554246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:12.349 [2024-07-24 14:25:39.554268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:12.349 [2024-07-24 14:25:39.554271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:12.349 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:12.349 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:27:12.349 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:12.349 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:12.349 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:12.349 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:12.349 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:12.349 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.349 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:12.349 [2024-07-24 14:25:39.707607] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6f2cd0/0x6f71c0) succeed. 00:27:12.349 [2024-07-24 14:25:39.718658] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6f42c0/0x738850) succeed. 
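With the target initialized, the first RPC above creates the RDMA transport, and the two NOTICE lines confirm both mlx5 ports were claimed. rpc_cmd is a thin wrapper around scripts/rpc.py, so the equivalent direct call would look roughly like this (the default socket path is an assumption here; -u sets the in-capsule data size in bytes):

# Same RPC as traced above, issued directly against the target's socket.
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport \
    -t rdma --num-shared-buffers 1024 -u 8192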
00:27:12.607 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.607 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:12.607 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:12.607 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:12.607 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:12.607 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:12.607 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:12.607 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:12.607 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:12.607 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:12.607 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:12.607 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:12.607 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:12.607 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:12.607 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:12.607 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:12.607 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:12.607 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:12.607 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:12.607 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:12.607 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:12.607 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:12.607 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:12.607 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:12.607 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:12.607 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:12.607 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:12.607 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.607 14:25:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:12.607 Malloc1 00:27:12.607 [2024-07-24 14:25:39.945125] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:12.607 Malloc2 00:27:12.865 Malloc3 00:27:12.865 Malloc4 
00:27:12.865 Malloc5 00:27:12.865 Malloc6 00:27:12.865 Malloc7 00:27:13.123 Malloc8 00:27:13.123 Malloc9 00:27:13.123 Malloc10 00:27:13.123 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.123 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:13.123 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:13.123 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:13.123 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=187916 00:27:13.123 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 187916 /var/tmp/bdevperf.sock 00:27:13.123 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 187916 ']' 00:27:13.123 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:13.123 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:13.123 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:13.123 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:13.123 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:27:13.123 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:13.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
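Note the -r /var/tmp/bdevperf.sock flag above: bdevperf gets its own RPC socket so it can be driven independently of the target's default socket, and waitforlisten blocks until that socket answers. A rough, hypothetical condensation of such a wait loop (the real helper also tracks the pid); rpc_get_methods serves only as a cheap liveness probe:

# Poll the app's private RPC socket until it responds.
sock=/var/tmp/bdevperf.sock
until scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done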
00:27:13.123 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:27:13.123 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:13.123 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.123 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:13.123 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.123 { 00:27:13.123 "params": { 00:27:13.123 "name": "Nvme$subsystem", 00:27:13.123 "trtype": "$TEST_TRANSPORT", 00:27:13.123 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.123 "adrfam": "ipv4", 00:27:13.123 "trsvcid": "$NVMF_PORT", 00:27:13.123 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.123 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.123 "hdgst": ${hdgst:-false}, 00:27:13.123 "ddgst": ${ddgst:-false} 00:27:13.124 }, 00:27:13.124 "method": "bdev_nvme_attach_controller" 00:27:13.124 } 00:27:13.124 EOF 00:27:13.124 )") 00:27:13.124 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:13.124 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.124 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.124 { 00:27:13.124 "params": { 00:27:13.124 "name": "Nvme$subsystem", 00:27:13.124 "trtype": "$TEST_TRANSPORT", 00:27:13.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.124 "adrfam": "ipv4", 00:27:13.124 "trsvcid": "$NVMF_PORT", 00:27:13.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.124 "hdgst": ${hdgst:-false}, 00:27:13.124 "ddgst": ${ddgst:-false} 00:27:13.124 }, 00:27:13.124 "method": "bdev_nvme_attach_controller" 00:27:13.124 } 00:27:13.124 EOF 00:27:13.124 )") 00:27:13.124 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:13.124 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.124 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.124 { 00:27:13.124 "params": { 00:27:13.124 "name": "Nvme$subsystem", 00:27:13.124 "trtype": "$TEST_TRANSPORT", 00:27:13.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.124 "adrfam": "ipv4", 00:27:13.124 "trsvcid": "$NVMF_PORT", 00:27:13.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.124 "hdgst": ${hdgst:-false}, 00:27:13.124 "ddgst": ${ddgst:-false} 00:27:13.124 }, 00:27:13.124 "method": "bdev_nvme_attach_controller" 00:27:13.124 } 00:27:13.124 EOF 00:27:13.124 )") 00:27:13.124 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:13.124 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.124 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.124 { 00:27:13.124 "params": { 00:27:13.124 "name": "Nvme$subsystem", 00:27:13.124 "trtype": "$TEST_TRANSPORT", 00:27:13.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.124 "adrfam": "ipv4", 00:27:13.124 "trsvcid": "$NVMF_PORT", 00:27:13.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.124 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:27:13.124 "hdgst": ${hdgst:-false}, 00:27:13.124 "ddgst": ${ddgst:-false} 00:27:13.124 }, 00:27:13.124 "method": "bdev_nvme_attach_controller" 00:27:13.124 } 00:27:13.124 EOF 00:27:13.124 )") 00:27:13.124 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:13.124 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.124 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.124 { 00:27:13.124 "params": { 00:27:13.124 "name": "Nvme$subsystem", 00:27:13.124 "trtype": "$TEST_TRANSPORT", 00:27:13.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.124 "adrfam": "ipv4", 00:27:13.124 "trsvcid": "$NVMF_PORT", 00:27:13.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.124 "hdgst": ${hdgst:-false}, 00:27:13.124 "ddgst": ${ddgst:-false} 00:27:13.124 }, 00:27:13.124 "method": "bdev_nvme_attach_controller" 00:27:13.124 } 00:27:13.124 EOF 00:27:13.124 )") 00:27:13.124 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:13.124 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.124 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.124 { 00:27:13.124 "params": { 00:27:13.124 "name": "Nvme$subsystem", 00:27:13.124 "trtype": "$TEST_TRANSPORT", 00:27:13.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.124 "adrfam": "ipv4", 00:27:13.124 "trsvcid": "$NVMF_PORT", 00:27:13.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.124 "hdgst": ${hdgst:-false}, 00:27:13.124 "ddgst": ${ddgst:-false} 00:27:13.124 }, 00:27:13.124 "method": "bdev_nvme_attach_controller" 00:27:13.124 } 00:27:13.124 EOF 00:27:13.124 )") 00:27:13.124 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:13.124 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.124 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.124 { 00:27:13.124 "params": { 00:27:13.124 "name": "Nvme$subsystem", 00:27:13.124 "trtype": "$TEST_TRANSPORT", 00:27:13.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.124 "adrfam": "ipv4", 00:27:13.124 "trsvcid": "$NVMF_PORT", 00:27:13.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.124 "hdgst": ${hdgst:-false}, 00:27:13.124 "ddgst": ${ddgst:-false} 00:27:13.124 }, 00:27:13.124 "method": "bdev_nvme_attach_controller" 00:27:13.124 } 00:27:13.124 EOF 00:27:13.124 )") 00:27:13.124 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:13.124 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.124 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.124 { 00:27:13.124 "params": { 00:27:13.124 "name": "Nvme$subsystem", 00:27:13.124 "trtype": "$TEST_TRANSPORT", 00:27:13.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.124 "adrfam": "ipv4", 00:27:13.124 "trsvcid": "$NVMF_PORT", 00:27:13.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.124 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:27:13.124 "hdgst": ${hdgst:-false}, 00:27:13.124 "ddgst": ${ddgst:-false} 00:27:13.124 }, 00:27:13.124 "method": "bdev_nvme_attach_controller" 00:27:13.124 } 00:27:13.124 EOF 00:27:13.124 )") 00:27:13.124 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:13.124 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.124 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.124 { 00:27:13.124 "params": { 00:27:13.124 "name": "Nvme$subsystem", 00:27:13.124 "trtype": "$TEST_TRANSPORT", 00:27:13.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.124 "adrfam": "ipv4", 00:27:13.124 "trsvcid": "$NVMF_PORT", 00:27:13.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.124 "hdgst": ${hdgst:-false}, 00:27:13.124 "ddgst": ${ddgst:-false} 00:27:13.124 }, 00:27:13.124 "method": "bdev_nvme_attach_controller" 00:27:13.124 } 00:27:13.124 EOF 00:27:13.124 )") 00:27:13.124 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:13.124 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.124 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.124 { 00:27:13.124 "params": { 00:27:13.124 "name": "Nvme$subsystem", 00:27:13.124 "trtype": "$TEST_TRANSPORT", 00:27:13.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.124 "adrfam": "ipv4", 00:27:13.124 "trsvcid": "$NVMF_PORT", 00:27:13.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.124 "hdgst": ${hdgst:-false}, 00:27:13.124 "ddgst": ${ddgst:-false} 00:27:13.124 }, 00:27:13.124 "method": "bdev_nvme_attach_controller" 00:27:13.124 } 00:27:13.124 EOF 00:27:13.124 )") 00:27:13.124 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:13.124 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:27:13.124 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:27:13.124 14:25:40 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:13.124 "params": { 00:27:13.124 "name": "Nvme1", 00:27:13.124 "trtype": "rdma", 00:27:13.124 "traddr": "192.168.100.8", 00:27:13.124 "adrfam": "ipv4", 00:27:13.124 "trsvcid": "4420", 00:27:13.124 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:13.124 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:13.124 "hdgst": false, 00:27:13.124 "ddgst": false 00:27:13.124 }, 00:27:13.124 "method": "bdev_nvme_attach_controller" 00:27:13.124 },{ 00:27:13.124 "params": { 00:27:13.124 "name": "Nvme2", 00:27:13.124 "trtype": "rdma", 00:27:13.124 "traddr": "192.168.100.8", 00:27:13.124 "adrfam": "ipv4", 00:27:13.124 "trsvcid": "4420", 00:27:13.124 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:13.124 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:13.124 "hdgst": false, 00:27:13.124 "ddgst": false 00:27:13.124 }, 00:27:13.124 "method": "bdev_nvme_attach_controller" 00:27:13.124 },{ 00:27:13.124 "params": { 00:27:13.124 "name": "Nvme3", 00:27:13.125 "trtype": "rdma", 00:27:13.125 "traddr": "192.168.100.8", 00:27:13.125 "adrfam": "ipv4", 00:27:13.125 "trsvcid": "4420", 00:27:13.125 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:13.125 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:13.125 "hdgst": false, 00:27:13.125 "ddgst": false 00:27:13.125 }, 00:27:13.125 "method": "bdev_nvme_attach_controller" 00:27:13.125 },{ 00:27:13.125 "params": { 00:27:13.125 "name": "Nvme4", 00:27:13.125 "trtype": "rdma", 00:27:13.125 "traddr": "192.168.100.8", 00:27:13.125 "adrfam": "ipv4", 00:27:13.125 "trsvcid": "4420", 00:27:13.125 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:13.125 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:13.125 "hdgst": false, 00:27:13.125 "ddgst": false 00:27:13.125 }, 00:27:13.125 "method": "bdev_nvme_attach_controller" 00:27:13.125 },{ 00:27:13.125 "params": { 00:27:13.125 "name": "Nvme5", 00:27:13.125 "trtype": "rdma", 00:27:13.125 "traddr": "192.168.100.8", 00:27:13.125 "adrfam": "ipv4", 00:27:13.125 "trsvcid": "4420", 00:27:13.125 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:13.125 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:13.125 "hdgst": false, 00:27:13.125 "ddgst": false 00:27:13.125 }, 00:27:13.125 "method": "bdev_nvme_attach_controller" 00:27:13.125 },{ 00:27:13.125 "params": { 00:27:13.125 "name": "Nvme6", 00:27:13.125 "trtype": "rdma", 00:27:13.125 "traddr": "192.168.100.8", 00:27:13.125 "adrfam": "ipv4", 00:27:13.125 "trsvcid": "4420", 00:27:13.125 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:13.125 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:13.125 "hdgst": false, 00:27:13.125 "ddgst": false 00:27:13.125 }, 00:27:13.125 "method": "bdev_nvme_attach_controller" 00:27:13.125 },{ 00:27:13.125 "params": { 00:27:13.125 "name": "Nvme7", 00:27:13.125 "trtype": "rdma", 00:27:13.125 "traddr": "192.168.100.8", 00:27:13.125 "adrfam": "ipv4", 00:27:13.125 "trsvcid": "4420", 00:27:13.125 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:13.125 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:13.125 "hdgst": false, 00:27:13.125 "ddgst": false 00:27:13.125 }, 00:27:13.125 "method": "bdev_nvme_attach_controller" 00:27:13.125 },{ 00:27:13.125 "params": { 00:27:13.125 "name": "Nvme8", 00:27:13.125 "trtype": "rdma", 00:27:13.125 "traddr": "192.168.100.8", 00:27:13.125 "adrfam": "ipv4", 00:27:13.125 "trsvcid": "4420", 00:27:13.125 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:13.125 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:27:13.125 "hdgst": false, 00:27:13.125 "ddgst": false 00:27:13.125 }, 00:27:13.125 "method": "bdev_nvme_attach_controller" 00:27:13.125 },{ 00:27:13.125 "params": { 00:27:13.125 "name": "Nvme9", 00:27:13.125 "trtype": "rdma", 00:27:13.125 "traddr": "192.168.100.8", 00:27:13.125 "adrfam": "ipv4", 00:27:13.125 "trsvcid": "4420", 00:27:13.125 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:13.125 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:13.125 "hdgst": false, 00:27:13.125 "ddgst": false 00:27:13.125 }, 00:27:13.125 "method": "bdev_nvme_attach_controller" 00:27:13.125 },{ 00:27:13.125 "params": { 00:27:13.125 "name": "Nvme10", 00:27:13.125 "trtype": "rdma", 00:27:13.125 "traddr": "192.168.100.8", 00:27:13.125 "adrfam": "ipv4", 00:27:13.125 "trsvcid": "4420", 00:27:13.125 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:13.125 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:13.125 "hdgst": false, 00:27:13.125 "ddgst": false 00:27:13.125 }, 00:27:13.125 "method": "bdev_nvme_attach_controller" 00:27:13.125 }' 00:27:13.125 [2024-07-24 14:25:40.462609] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:27:13.125 [2024-07-24 14:25:40.462712] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid187916 ] 00:27:13.383 EAL: No free 2048 kB hugepages reported on node 1 00:27:13.383 [2024-07-24 14:25:40.536806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.383 [2024-07-24 14:25:40.622923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.315 14:25:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:14.315 Running I/O for 10 seconds... 
00:27:14.315 14:25:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:27:14.315 14:25:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:14.315 14:25:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.315 14:25:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:14.573 14:25:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.573 14:25:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:14.573 14:25:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:14.573 14:25:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:14.573 14:25:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:27:14.573 14:25:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:27:14.573 14:25:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:14.573 14:25:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:14.573 14:25:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:14.573 14:25:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:14.573 14:25:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.573 14:25:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:14.573 14:25:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.573 14:25:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:14.573 14:25:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:14.573 14:25:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:14.831 14:25:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:14.831 14:25:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:14.831 14:25:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:14.831 14:25:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:14.831 14:25:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.831 14:25:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:14.831 14:25:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.831 14:25:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=91 00:27:14.831 14:25:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 91 -ge 100 ']' 00:27:14.831 14:25:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:15.089 14:25:42 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:15.089 14:25:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:15.089 14:25:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:15.089 14:25:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:15.089 14:25:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.089 14:25:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:15.346 14:25:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.346 14:25:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=219 00:27:15.346 14:25:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 219 -ge 100 ']' 00:27:15.346 14:25:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:27:15.346 14:25:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:27:15.346 14:25:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:27:15.346 14:25:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 187916 00:27:15.346 14:25:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 187916 ']' 00:27:15.346 14:25:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 187916 00:27:15.346 14:25:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:27:15.346 14:25:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:15.346 14:25:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 187916 00:27:15.346 14:25:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:15.346 14:25:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:15.347 14:25:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 187916' 00:27:15.347 killing process with pid 187916 14:25:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 187916 00:27:15.347 14:25:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 187916
00:27:15.604 Received shutdown signal, test time was about 1.195326 seconds
00:27:15.604
00:27:15.604 Latency(us)
00:27:15.604 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:15.604 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:15.604 Verification LBA range: start 0x0 length 0x400
00:27:15.604 Nvme1n1 : 1.17 293.06 18.32 0.00 0.00 214824.86 11602.30 231463.44
00:27:15.604 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:15.604 Verification LBA range: start 0x0 length 0x400
00:27:15.604 Nvme2n1 : 1.18 298.52 18.66 0.00 0.00 207839.99 11699.39 222142.77
00:27:15.604 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:15.604 Verification LBA range: start 0x0 length 0x400
00:27:15.604 Nvme3n1 : 1.18 325.92 20.37 0.00 0.00 187958.36 7233.23 164665.27
00:27:15.604 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:15.604 Verification LBA range: start 0x0 length 0x400
00:27:15.604 Nvme4n1 : 1.18 327.90 20.49 0.00 0.00 183763.94 5194.33 153014.42
00:27:15.604 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:15.604 Verification LBA range: start 0x0 length 0x400
00:27:15.604 Nvme5n1 : 1.18 324.72 20.30 0.00 0.00 182949.61 13204.29 140586.86
00:27:15.604 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:15.604 Verification LBA range: start 0x0 length 0x400
00:27:15.604 Nvme6n1 : 1.19 324.05 20.25 0.00 0.00 180606.55 16117.00 125829.12
00:27:15.604 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:15.604 Verification LBA range: start 0x0 length 0x400
00:27:15.604 Nvme7n1 : 1.19 323.38 20.21 0.00 0.00 178130.99 17476.27 112624.83
00:27:15.604 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:15.604 Verification LBA range: start 0x0 length 0x400
00:27:15.604 Nvme8n1 : 1.19 322.88 20.18 0.00 0.00 174427.02 18155.90 122722.23
00:27:15.604 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:15.604 Verification LBA range: start 0x0 length 0x400
00:27:15.604 Nvme9n1 : 1.19 322.20 20.14 0.00 0.00 173028.69 19320.98 132819.63
00:27:15.604 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:15.604 Verification LBA range: start 0x0 length 0x400
00:27:15.604 Nvme10n1 : 1.19 267.94 16.75 0.00 0.00 204340.64 12718.84 243891.01
00:27:15.604 ===================================================================================================================
00:27:15.604 Total : 3130.57 195.66 0.00 0.00 188063.71 5194.33 243891.01
00:27:15.862 14:25:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:27:16.795 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 187819 00:27:16.795 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:27:16.795 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:16.795 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:16.795 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:16.795 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:16.795 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:16.795 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:27:16.795 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:27:16.795 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:27:16.795 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:27:16.795 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:16.795 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:27:16.795 rmmod nvme_rdma
00:27:16.795 rmmod nvme_fabrics
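The unload step just above is deliberately failure-tolerant: module removal can fail while queue pairs are still draining, so it runs under set +e inside a bounded retry. A sketch of the pattern; the sleep pacing is an assumption, and the real nvmfcleanup loop in nvmf/common.sh may retry differently:

set +e                            # tolerate -r failures while references drain
for i in {1..20}; do
    modprobe -v -r nvme-rdma && break
    sleep 0.2                     # assumed pacing between attempts
done
modprobe -v -r nvme-fabrics       # fabrics goes last, once nvme-rdma is out
set -e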
00:27:16.795 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:16.795 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:27:16.795 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:27:16.795 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 187819 ']' 00:27:16.795 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 187819 00:27:16.795 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 187819 ']' 00:27:16.795 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 187819 00:27:16.795 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:27:16.795 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:16.795 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 187819 00:27:16.795 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:16.795 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:16.795 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 187819' 00:27:16.795 killing process with pid 187819 00:27:16.795 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 187819 00:27:16.795 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 187819 00:27:17.361 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:17.361 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:27:17.361 00:27:17.361 real 0m5.471s 00:27:17.361 user 0m22.404s 00:27:17.361 sys 0m1.044s 00:27:17.361 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:17.361 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:17.361 ************************************ 00:27:17.361 END TEST nvmf_shutdown_tc2 00:27:17.361 ************************************ 00:27:17.361 14:25:44 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:17.361 14:25:44 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:17.361 14:25:44 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:17.361 14:25:44 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:17.620 ************************************ 00:27:17.620 START TEST nvmf_shutdown_tc3 00:27:17.620 ************************************ 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:17.620 14:25:44 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:27:17.620 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:27:17.620 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:27:17.620 Found net devices under 0000:81:00.0: mlx_0_0 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:17.620 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:27:17.621 Found net devices under 0000:81:00.1: mlx_0_1 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # rdma_device_init 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # uname 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@63 -- # modprobe ib_core 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:27:17.621 14:25:44 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ 
-z 192.168.100.8 ]] 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:27:17.621 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:17.621 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:27:17.621 altname enp129s0f0np0 00:27:17.621 inet 192.168.100.8/24 scope global mlx_0_0 00:27:17.621 valid_lft forever preferred_lft forever 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:27:17.621 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:17.621 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:27:17.621 altname enp129s0f1np1 00:27:17.621 inet 192.168.100.9/24 scope global mlx_0_1 00:27:17.621 valid_lft forever preferred_lft forever 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@105 -- # continue 2 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:27:17.621 192.168.100.9' 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:27:17.621 192.168.100.9' 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # head -n 1 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:27:17.621 192.168.100.9' 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # tail -n +2 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # head -n 1 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:17.621 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:27:17.622 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:27:17.622 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:27:17.622 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:17.622 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:17.622 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:17.622 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:17.622 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=188532 00:27:17.622 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:17.622 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 188532 00:27:17.622 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 188532 ']' 00:27:17.622 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:17.622 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:17.622 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:17.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:17.622 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:17.622 14:25:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:17.622 [2024-07-24 14:25:44.941604] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:27:17.622 [2024-07-24 14:25:44.941693] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:17.622 EAL: No free 2048 kB hugepages reported on node 1 00:27:17.880 [2024-07-24 14:25:45.011883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:17.880 [2024-07-24 14:25:45.098999] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:17.880 [2024-07-24 14:25:45.099053] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:17.880 [2024-07-24 14:25:45.099077] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:17.880 [2024-07-24 14:25:45.099096] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:17.880 [2024-07-24 14:25:45.099106] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
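The address harvesting traced a little earlier reduces to one pipeline per RDMA netdev; a sketch built from the exact commands in the trace (the interface names are the ones this run enumerated):

get_ip_address() {
    local interface=$1
    # first IPv4 address on the interface, with the /24 prefix length stripped
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run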
00:27:17.880 [2024-07-24 14:25:45.099201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:17.880 [2024-07-24 14:25:45.099264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:17.880 [2024-07-24 14:25:45.099333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:17.880 [2024-07-24 14:25:45.099335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:17.880 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:17.880 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:27:17.880 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:17.880 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:17.880 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:17.880 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:17.880 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:17.880 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.880 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:18.138 [2024-07-24 14:25:45.276466] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x20fbcd0/0x21001c0) succeed. 00:27:18.138 [2024-07-24 14:25:45.287848] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20fd2c0/0x2141850) succeed. 
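The target bring-up that produced the notices above can also be driven by hand; a sketch assuming scripts/rpc.py against the default /var/tmp/spdk.sock. This run issues the same transport call through rpc_cmd; the bdev/subsystem/namespace/listener lines for cnode1 (with an assumed 64 MiB Malloc bdev) are illustrative, matching the listener notice further below:

# transport parameters exactly as in the traced nvmf_create_transport call
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
# one backing bdev and subsystem; the run repeats this for cnode1..cnode10
./scripts/rpc.py bdev_malloc_create -b Malloc1 64 512
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t rdma -a 192.168.100.8 -s 4420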
00:27:18.138 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.138 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:18.138 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:18.138 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:18.138 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:18.138 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:18.138 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:18.138 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:18.138 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:18.138 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:18.138 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:18.138 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:18.138 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:18.138 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:18.138 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:18.138 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:18.138 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:18.138 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:18.138 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:18.138 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:18.138 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:18.138 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:18.138 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:18.138 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:18.138 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:18.138 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:18.138 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:18.138 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.138 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:18.138 Malloc1 00:27:18.396 [2024-07-24 14:25:45.530278] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:18.396 Malloc2 00:27:18.396 Malloc3 00:27:18.396 Malloc4 
00:27:18.396 Malloc5 00:27:18.654 Malloc6 00:27:18.654 Malloc7 00:27:18.654 Malloc8 00:27:18.654 Malloc9 00:27:18.654 Malloc10 00:27:18.654 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.654 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:18.654 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:18.654 14:25:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:18.654 14:25:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=188705 00:27:18.654 14:25:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 188705 /var/tmp/bdevperf.sock 00:27:18.654 14:25:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 188705 ']' 00:27:18.654 14:25:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:18.654 14:25:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:18.654 14:25:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:18.654 14:25:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:18.654 14:25:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:27:18.654 14:25:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:18.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
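For reference before the tc3 run repeats it: the killprocess helper traced during the tc2 teardown reduces to verify, signal, reap. A sketch assembled from those earlier trace lines; the sudo branch is not exercised in this log (the process names are reactor_0/reactor_1), so its handling here is an assumption:

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1        # bail out if it already exited
    local process_name=
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    # assumption: when the target was started via sudo, the real helper
    # signals the escalated child instead of the wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                       # reap and propagate the exit status
}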
00:27:18.654 14:25:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config
00:27:18.654 14:25:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable
00:27:18.654 14:25:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:27:18.654 14:25:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:18.654 14:25:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:27:18.654 {
00:27:18.654 "params": {
00:27:18.654 "name": "Nvme$subsystem",
00:27:18.654 "trtype": "$TEST_TRANSPORT",
00:27:18.654 "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:18.654 "adrfam": "ipv4",
00:27:18.654 "trsvcid": "$NVMF_PORT",
00:27:18.654 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:18.654 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:18.654 "hdgst": ${hdgst:-false},
00:27:18.654 "ddgst": ${ddgst:-false}
00:27:18.654 },
00:27:18.654 "method": "bdev_nvme_attach_controller"
00:27:18.654 }
00:27:18.654 EOF
00:27:18.654 )")
00:27:18.654 14:25:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat
[... the nvmf/common.sh@534/@554 iteration above repeats verbatim for each of the ten subsystems; only the first is shown, with timestamps advancing from 00:27:18.654 to 00:27:18.913 ...]
00:27:18.914 14:25:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq .
00:27:18.914 14:25:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=,
00:27:18.914 14:25:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:27:18.914 "params": {
00:27:18.914 "name": "Nvme1",
00:27:18.914 "trtype": "rdma",
00:27:18.914 "traddr": "192.168.100.8",
00:27:18.914 "adrfam": "ipv4",
00:27:18.914 "trsvcid": "4420",
00:27:18.914 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:27:18.914 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:27:18.914 "hdgst": false,
00:27:18.914 "ddgst": false
00:27:18.914 },
00:27:18.914 "method": "bdev_nvme_attach_controller"
00:27:18.914 },{
[... identical parameter blocks for Nvme2 through Nvme10 follow, differing only in the 2-10 index used in name/subnqn/hostnqn ...]
00:27:18.914 "method": "bdev_nvme_attach_controller"
00:27:18.914 }'
00:27:18.914 [2024-07-24 14:25:46.051288] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:27:18.914 [2024-07-24 14:25:46.051389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid188705 ]
00:27:18.914 EAL: No free 2048 kB hugepages reported on node 1
00:27:18.914 [2024-07-24 14:25:46.126784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:18.914 [2024-07-24 14:25:46.212664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:19.846 Running I/O for 10 seconds...
00:27:19.846 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:27:19.846 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0
00:27:19.846 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:27:19.846 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:19.846 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:20.104 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:20.104 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:27:20.104 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:27:20.104 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:27:20.104 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']'
00:27:20.104 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1
00:27:20.104 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i
00:27:20.104 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 ))
00:27:20.104 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:27:20.104 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:27:20.104 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:27:20.104 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:20.104 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:20.104 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:20.104 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3
00:27:20.104 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']'
00:27:20.104 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25
00:27:20.362 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- ))
00:27:20.362 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:27:20.362 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:27:20.362 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:27:20.362 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:20.362 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:20.620 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:20.620 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=115
00:27:20.620 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 115 -ge 100 ']'
00:27:20.620 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0
00:27:20.620 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break
00:27:20.620 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0
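The polling loop traced above (shutdown.sh@50-69) needed two passes: the first sample saw only 3 completed reads, the second saw 115, which clears the 100-read threshold and lets the test move on to killing the target mid-I/O. Reassembled from the xtrace lines, waitforio is essentially:

# Reassembled from the trace above; rpc_cmd wraps scripts/rpc.py against
# the given socket. Returns 0 once the bdev shows >= 100 completed reads,
# 1 if ten 0.25 s polls elapse without reaching that count.
waitforio() {
    local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}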
00:27:20.620 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 188532
00:27:20.620 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 188532 ']'
00:27:20.620 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 188532
00:27:20.620 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname
00:27:20.620 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:27:20.620 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 188532
00:27:20.620 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:27:20.620 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:27:20.620 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 188532'
00:27:20.620 killing process with pid 188532
00:27:20.620 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 188532
00:27:20.620 14:25:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 188532
00:27:20.620 [2024-07-24 14:25:47.831259] rdma.c: 859:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 4
00:27:21.185 14:25:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid=
00:27:21.185 14:25:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1
00:27:21.760 [2024-07-24 14:25:48.894699] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192569c0 was disconnected and freed. reset controller.
00:27:21.760 [2024-07-24 14:25:48.897125] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256740 was disconnected and freed. reset controller.
00:27:21.760 [2024-07-24 14:25:48.899719] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192564c0 was disconnected and freed. reset controller.
00:27:21.760 [2024-07-24 14:25:48.901839] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256240 was disconnected and freed. reset controller.
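Each aborted I/O below is logged as a command/completion pair, and the "(00/08)" in every completion decodes to status code type 0x0 (generic) with status code 0x08, Command Aborted due to SQ Deletion, which is the expected way for in-flight verify I/O to fail when the target tears down its queues mid-run. One way to condense dumps like these when eyeballing a run, assuming the console output was saved to a file named shutdown.log:

# Counts aborted commands per opcode and RDMA rkey from the saved log.
awk '/nvme_io_qpair_print_command/ {
         op = ($0 ~ / WRITE /) ? "WRITE" : "READ"
         for (i = 1; i <= NF; i++) if ($i ~ /^key:/) key = $i
         count[op " " key]++
     }
     END { for (k in count) print count[k], k }' shutdown.log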
00:27:21.760 [2024-07-24 14:25:48.901882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d8fd00 len:0x10000 key:0x182f00
00:27:21.760 [2024-07-24 14:25:48.901903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:9095 p:1 m:0 dnr:0
[... 46 near-identical WRITE command/completion pairs elided: lba 26752 through 32512 in 128-block steps (cids vary), buffer keys 0x182f00 then 0x183000, every completion ABORTED - SQ DELETION (00/08) sqhd:9095 ...]
00:27:21.761 [2024-07-24 14:25:48.903321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019aafc00 len:0x10000 key:0x182e00
00:27:21.761 [2024-07-24 14:25:48.903334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:9095 p:1 m:0 dnr:0
00:27:21.761 [2024-07-24 14:25:48.906012] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60ee80 was disconnected and freed. reset controller.
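At this point the bdev layer has already recycled several target-side qpairs and is resetting the affected controllers; a second abort dump for the next qpair follows. In an interactive run you could confirm which controllers bdevperf still holds over the same RPC socket it was launched with (bdev_nvme_get_controllers is a stock SPDK RPC; using it here is this editor's suggestion, not part of the traced test):

# Lists the NVMe controllers bdevperf still holds after the resets above.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'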
00:27:21.761 [2024-07-24 14:25:48.906051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a39fd80 len:0x10000 key:0x183400 00:27:21.761 [2024-07-24 14:25:48.906084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.761 [2024-07-24 14:25:48.906105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a38fd00 len:0x10000 key:0x183400 00:27:21.761 [2024-07-24 14:25:48.906120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.761 [2024-07-24 14:25:48.906135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a37fc80 len:0x10000 key:0x183400 00:27:21.761 [2024-07-24 14:25:48.906148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.761 [2024-07-24 14:25:48.906162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a36fc00 len:0x10000 key:0x183400 00:27:21.761 [2024-07-24 14:25:48.906174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.761 [2024-07-24 14:25:48.906188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a35fb80 len:0x10000 key:0x183400 00:27:21.761 [2024-07-24 14:25:48.906200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.761 [2024-07-24 14:25:48.906214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a34fb00 len:0x10000 key:0x183400 00:27:21.761 [2024-07-24 14:25:48.906226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.761 [2024-07-24 14:25:48.906241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a33fa80 len:0x10000 key:0x183400 00:27:21.761 [2024-07-24 14:25:48.906253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.761 [2024-07-24 14:25:48.906267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a32fa00 len:0x10000 key:0x183400 00:27:21.761 [2024-07-24 14:25:48.906280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.761 [2024-07-24 14:25:48.906294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a31f980 len:0x10000 key:0x183400 00:27:21.761 [2024-07-24 14:25:48.906306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 
00:27:21.761 [2024-07-24 14:25:48.906320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a30f900 len:0x10000 key:0x183400 00:27:21.761 [2024-07-24 14:25:48.906337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.761 [2024-07-24 14:25:48.906352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2ff880 len:0x10000 key:0x183400 00:27:21.761 [2024-07-24 14:25:48.906365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.761 [2024-07-24 14:25:48.906379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2ef800 len:0x10000 key:0x183400 00:27:21.761 [2024-07-24 14:25:48.906392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.761 [2024-07-24 14:25:48.906405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2df780 len:0x10000 key:0x183400 00:27:21.761 [2024-07-24 14:25:48.906417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.761 [2024-07-24 14:25:48.906432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2cf700 len:0x10000 key:0x183400 00:27:21.761 [2024-07-24 14:25:48.906444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.761 [2024-07-24 14:25:48.906459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2bf680 len:0x10000 key:0x183400 00:27:21.761 [2024-07-24 14:25:48.906471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.761 [2024-07-24 14:25:48.906486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2af600 len:0x10000 key:0x183400 00:27:21.761 [2024-07-24 14:25:48.906499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.762 [2024-07-24 14:25:48.906514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a29f580 len:0x10000 key:0x183400 00:27:21.762 [2024-07-24 14:25:48.906527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.762 [2024-07-24 14:25:48.906541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a28f500 len:0x10000 key:0x183400 00:27:21.762 [2024-07-24 14:25:48.906554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 
00:27:21.762 [2024-07-24 14:25:48.906567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a27f480 len:0x10000 key:0x183400 00:27:21.762 [2024-07-24 14:25:48.906580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.762 [2024-07-24 14:25:48.906594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a26f400 len:0x10000 key:0x183400 00:27:21.762 [2024-07-24 14:25:48.906607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.762 [2024-07-24 14:25:48.906620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a25f380 len:0x10000 key:0x183400 00:27:21.762 [2024-07-24 14:25:48.906635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.762 [2024-07-24 14:25:48.906650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a24f300 len:0x10000 key:0x183400 00:27:21.762 [2024-07-24 14:25:48.906663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.762 [2024-07-24 14:25:48.906678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a23f280 len:0x10000 key:0x183400 00:27:21.762 [2024-07-24 14:25:48.906691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.762 [2024-07-24 14:25:48.906706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a22f200 len:0x10000 key:0x183400 00:27:21.762 [2024-07-24 14:25:48.906719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.762 [2024-07-24 14:25:48.906734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a21f180 len:0x10000 key:0x183400 00:27:21.762 [2024-07-24 14:25:48.906746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.762 [2024-07-24 14:25:48.906760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a20f100 len:0x10000 key:0x183400 00:27:21.762 [2024-07-24 14:25:48.906786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.762 [2024-07-24 14:25:48.906812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5f0000 len:0x10000 key:0x183200 00:27:21.762 [2024-07-24 14:25:48.906826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 
00:27:21.762 [2024-07-24 14:25:48.906840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5dff80 len:0x10000 key:0x183200 00:27:21.762 [2024-07-24 14:25:48.906853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.762 [2024-07-24 14:25:48.906867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5cff00 len:0x10000 key:0x183200 00:27:21.762 [2024-07-24 14:25:48.906879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.762 [2024-07-24 14:25:48.906893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5bfe80 len:0x10000 key:0x183200 00:27:21.762 [2024-07-24 14:25:48.906906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.762 [2024-07-24 14:25:48.906920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5afe00 len:0x10000 key:0x183200 00:27:21.762 [2024-07-24 14:25:48.906932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.762 [2024-07-24 14:25:48.906945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a59fd80 len:0x10000 key:0x183200 00:27:21.762 [2024-07-24 14:25:48.906958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.762 [2024-07-24 14:25:48.906976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a58fd00 len:0x10000 key:0x183200 00:27:21.762 [2024-07-24 14:25:48.906989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.762 [2024-07-24 14:25:48.907002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a57fc80 len:0x10000 key:0x183200 00:27:21.762 [2024-07-24 14:25:48.907015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.762 [2024-07-24 14:25:48.907029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a56fc00 len:0x10000 key:0x183200 00:27:21.762 [2024-07-24 14:25:48.907041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.762 [2024-07-24 14:25:48.907056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0efe00 len:0x10000 key:0x183100 00:27:21.762 [2024-07-24 14:25:48.907068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 
00:27:21.762 [2024-07-24 14:25:48.907082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c777000 len:0x10000 key:0x183f00 00:27:21.762 [2024-07-24 14:25:48.907108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.762 [2024-07-24 14:25:48.907124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c756000 len:0x10000 key:0x183f00 00:27:21.762 [2024-07-24 14:25:48.907136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.762 [2024-07-24 14:25:48.907149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c735000 len:0x10000 key:0x183f00 00:27:21.762 [2024-07-24 14:25:48.907160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.762 [2024-07-24 14:25:48.907174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c714000 len:0x10000 key:0x183f00 00:27:21.762 [2024-07-24 14:25:48.907186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.762 [2024-07-24 14:25:48.907201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000124cb000 len:0x10000 key:0x183f00 00:27:21.762 [2024-07-24 14:25:48.907213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.762 [2024-07-24 14:25:48.907226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000124aa000 len:0x10000 key:0x183f00 00:27:21.762 [2024-07-24 14:25:48.907238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.762 [2024-07-24 14:25:48.907252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012489000 len:0x10000 key:0x183f00 00:27:21.762 [2024-07-24 14:25:48.907264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.762 [2024-07-24 14:25:48.907281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012468000 len:0x10000 key:0x183f00 00:27:21.762 [2024-07-24 14:25:48.907294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 00:27:21.762 [2024-07-24 14:25:48.907308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012447000 len:0x10000 key:0x183f00 00:27:21.762 [2024-07-24 14:25:48.907320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0 
00:27:21.762 [2024-07-24 14:25:48.907334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012426000 len:0x10000 key:0x183f00
00:27:21.762 [2024-07-24 14:25:48.907346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0
00:27:21.762 [2024-07-24 14:25:48.907360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fe1f000 len:0x10000 key:0x183f00
00:27:21.762 [2024-07-24 14:25:48.907371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0
00:27:21.762 [2024-07-24 14:25:48.907385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fdfe000 len:0x10000 key:0x183f00
00:27:21.762 [2024-07-24 14:25:48.907398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0
00:27:21.762 [2024-07-24 14:25:48.907412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fddd000 len:0x10000 key:0x183f00
00:27:21.762 [2024-07-24 14:25:48.907424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0
00:27:21.762 [2024-07-24 14:25:48.907437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fdbc000 len:0x10000 key:0x183f00
00:27:21.762 [2024-07-24 14:25:48.907449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0
00:27:21.762 [2024-07-24 14:25:48.907462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fd9b000 len:0x10000 key:0x183f00
00:27:21.762 [2024-07-24 14:25:48.907475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0
00:27:21.762 [2024-07-24 14:25:48.907489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fd7a000 len:0x10000 key:0x183f00
00:27:21.763 [2024-07-24 14:25:48.907501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0
00:27:21.763 [2024-07-24 14:25:48.907514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011133000 len:0x10000 key:0x183f00
00:27:21.763 [2024-07-24 14:25:48.907526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0
00:27:21.763 [2024-07-24 14:25:48.907539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011112000 len:0x10000 key:0x183f00
00:27:21.763 [2024-07-24 14:25:48.907552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0
00:27:21.763 [2024-07-24 14:25:48.907570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000110f1000 len:0x10000 key:0x183f00
00:27:21.763 [2024-07-24 14:25:48.907584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0
00:27:21.763 [2024-07-24 14:25:48.907598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000110d0000 len:0x10000 key:0x183f00
00:27:21.763 [2024-07-24 14:25:48.907611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0
00:27:21.763 [2024-07-24 14:25:48.907624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e55000 len:0x10000 key:0x183f00
00:27:21.763 [2024-07-24 14:25:48.907637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0
00:27:21.763 [2024-07-24 14:25:48.907651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e34000 len:0x10000 key:0x183f00
00:27:21.763 [2024-07-24 14:25:48.907663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0
00:27:21.763 [2024-07-24 14:25:48.907677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e13000 len:0x10000 key:0x183f00
00:27:21.763 [2024-07-24 14:25:48.907689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0
00:27:21.763 [2024-07-24 14:25:48.907704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012df2000 len:0x10000 key:0x183f00
00:27:21.763 [2024-07-24 14:25:48.907716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0
00:27:21.763 [2024-07-24 14:25:48.907730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012dd1000 len:0x10000 key:0x183f00
00:27:21.763 [2024-07-24 14:25:48.907741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0
00:27:21.763 [2024-07-24 14:25:48.907755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012db0000 len:0x10000 key:0x183f00
00:27:21.763 [2024-07-24 14:25:48.907781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0
00:27:21.763 [2024-07-24 14:25:48.907806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba0f000 len:0x10000 key:0x183f00
00:27:21.763 [2024-07-24 14:25:48.907820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0
00:27:21.763 [2024-07-24 14:25:48.907834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b9ee000 len:0x10000 key:0x183f00
00:27:21.763 [2024-07-24 14:25:48.907846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:09fd p:1 m:0 dnr:0
00:27:21.763 [2024-07-24 14:25:48.910250] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806bc0 was disconnected and freed. reset controller.
00:27:21.763 [2024-07-24 14:25:48.910289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a63f280 len:0x10000 key:0x183300
00:27:21.763 [2024-07-24 14:25:48.910311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.763 [2024-07-24 14:25:48.910332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a62f200 len:0x10000 key:0x183300
00:27:21.763 [2024-07-24 14:25:48.910347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.763 [2024-07-24 14:25:48.910363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a61f180 len:0x10000 key:0x183300
00:27:21.763 [2024-07-24 14:25:48.910375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.763 [2024-07-24 14:25:48.910389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a60f100 len:0x10000 key:0x183300
00:27:21.763 [2024-07-24 14:25:48.910401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.763 [2024-07-24 14:25:48.910414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9f0000 len:0x10000 key:0x183500
00:27:21.763 [2024-07-24 14:25:48.910427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.763 [2024-07-24 14:25:48.910441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9dff80 len:0x10000 key:0x183500
00:27:21.763 [2024-07-24 14:25:48.910453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.763 [2024-07-24 14:25:48.910467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9cff00 len:0x10000 key:0x183500
00:27:21.763 [2024-07-24 14:25:48.910478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.763 [2024-07-24 14:25:48.910492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9bfe80 len:0x10000 key:0x183500
00:27:21.763 [2024-07-24 14:25:48.910505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.763 [2024-07-24 14:25:48.910519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9afe00 len:0x10000 key:0x183500
00:27:21.763 [2024-07-24 14:25:48.910532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.763 [2024-07-24 14:25:48.910545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a99fd80 len:0x10000 key:0x183500
00:27:21.763 [2024-07-24 14:25:48.910556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.763 [2024-07-24 14:25:48.910570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a98fd00 len:0x10000 key:0x183500
00:27:21.763 [2024-07-24 14:25:48.910583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.763 [2024-07-24 14:25:48.910596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a97fc80 len:0x10000 key:0x183500
00:27:21.763 [2024-07-24 14:25:48.910608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.763 [2024-07-24 14:25:48.910629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a96fc00 len:0x10000 key:0x183500
00:27:21.763 [2024-07-24 14:25:48.910642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.763 [2024-07-24 14:25:48.910655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a95fb80 len:0x10000 key:0x183500
00:27:21.763 [2024-07-24 14:25:48.910667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.763 [2024-07-24 14:25:48.910681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a94fb00 len:0x10000 key:0x183500
00:27:21.763 [2024-07-24 14:25:48.910693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.763 [2024-07-24 14:25:48.910706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a93fa80 len:0x10000 key:0x183500
00:27:21.763 [2024-07-24 14:25:48.910719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.763 [2024-07-24 14:25:48.910733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a92fa00 len:0x10000 key:0x183500
00:27:21.763 [2024-07-24 14:25:48.910753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.763 [2024-07-24 14:25:48.910767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a91f980 len:0x10000 key:0x183500
00:27:21.763 [2024-07-24 14:25:48.910779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.763 [2024-07-24 14:25:48.910818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a90f900 len:0x10000 key:0x183500
00:27:21.763 [2024-07-24 14:25:48.910834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.763 [2024-07-24 14:25:48.910848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ff880 len:0x10000 key:0x183500
00:27:21.763 [2024-07-24 14:25:48.910861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.763 [2024-07-24 14:25:48.910875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ef800 len:0x10000 key:0x183500
00:27:21.763 [2024-07-24 14:25:48.910887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.763 [2024-07-24 14:25:48.910907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8df780 len:0x10000 key:0x183500
00:27:21.763 [2024-07-24 14:25:48.910921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.763 [2024-07-24 14:25:48.910935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8cf700 len:0x10000 key:0x183500
00:27:21.764 [2024-07-24 14:25:48.910947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.764 [2024-07-24 14:25:48.910965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8bf680 len:0x10000 key:0x183500
00:27:21.764 [2024-07-24 14:25:48.910978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.764 [2024-07-24 14:25:48.910993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8af600 len:0x10000 key:0x183500
00:27:21.764 [2024-07-24 14:25:48.911005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.764 [2024-07-24 14:25:48.911019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a89f580 len:0x10000 key:0x183500
00:27:21.764 [2024-07-24 14:25:48.911032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.764 [2024-07-24 14:25:48.911046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a88f500 len:0x10000 key:0x183500
00:27:21.764 [2024-07-24 14:25:48.911059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.764 [2024-07-24 14:25:48.911073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a87f480 len:0x10000 key:0x183500
00:27:21.764 [2024-07-24 14:25:48.911085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.764 [2024-07-24 14:25:48.911114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a86f400 len:0x10000 key:0x183500
00:27:21.764 [2024-07-24 14:25:48.911127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.764 [2024-07-24 14:25:48.911141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a85f380 len:0x10000 key:0x183500
00:27:21.764 [2024-07-24 14:25:48.911153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.764 [2024-07-24 14:25:48.911166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a84f300 len:0x10000 key:0x183500
00:27:21.764 [2024-07-24 14:25:48.911178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.764 [2024-07-24 14:25:48.911192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a45f980 len:0x10000 key:0x183200
00:27:21.764 [2024-07-24 14:25:48.911204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.764 [2024-07-24 14:25:48.911217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f9bd000 len:0x10000 key:0x183f00
00:27:21.764 [2024-07-24 14:25:48.911235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.764 [2024-07-24 14:25:48.911249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f99c000 len:0x10000 key:0x183f00
00:27:21.764 [2024-07-24 14:25:48.911261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.764 [2024-07-24 14:25:48.911279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f97b000 len:0x10000 key:0x183f00
00:27:21.764 [2024-07-24 14:25:48.911292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.764 [2024-07-24 14:25:48.911305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f95a000 len:0x10000 key:0x183f00
00:27:21.764 [2024-07-24 14:25:48.911318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.764 [2024-07-24 14:25:48.911331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f939000 len:0x10000 key:0x183f00
00:27:21.764 [2024-07-24 14:25:48.911343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.764 [2024-07-24 14:25:48.911357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f918000 len:0x10000 key:0x183f00
00:27:21.764 [2024-07-24 14:25:48.911369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.764 [2024-07-24 14:25:48.911383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f8f7000 len:0x10000 key:0x183f00
00:27:21.764 [2024-07-24 14:25:48.911394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.764 [2024-07-24 14:25:48.911408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f8d6000 len:0x10000 key:0x183f00
00:27:21.764 [2024-07-24 14:25:48.911420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.764 [2024-07-24 14:25:48.911434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f8b5000 len:0x10000 key:0x183f00
00:27:21.764 [2024-07-24 14:25:48.911447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.764 [2024-07-24 14:25:48.911460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f894000 len:0x10000 key:0x183f00
00:27:21.764 [2024-07-24 14:25:48.911472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.764 [2024-07-24 14:25:48.911486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f873000 len:0x10000 key:0x183f00
00:27:21.764 [2024-07-24 14:25:48.911498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.764 [2024-07-24 14:25:48.911512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f852000 len:0x10000 key:0x183f00
00:27:21.764 [2024-07-24 14:25:48.911524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.764 [2024-07-24 14:25:48.911538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f831000 len:0x10000 key:0x183f00
00:27:21.764 [2024-07-24 14:25:48.911549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.764 [2024-07-24 14:25:48.911566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f810000 len:0x10000 key:0x183f00
00:27:21.764 [2024-07-24 14:25:48.911579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.764 [2024-07-24 14:25:48.911593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012171000 len:0x10000 key:0x183f00
00:27:21.764 [2024-07-24 14:25:48.911605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.764 [2024-07-24 14:25:48.911618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012150000 len:0x10000 key:0x183f00
00:27:21.764 [2024-07-24 14:25:48.911630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.764 [2024-07-24 14:25:48.911643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b949000 len:0x10000 key:0x183f00
00:27:21.764 [2024-07-24 14:25:48.911661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.764 [2024-07-24 14:25:48.911675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b928000 len:0x10000 key:0x183f00
00:27:21.764 [2024-07-24 14:25:48.911687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.764 [2024-07-24 14:25:48.911701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b907000 len:0x10000 key:0x183f00
00:27:21.765 [2024-07-24 14:25:48.911713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.765 [2024-07-24 14:25:48.911726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b8e6000 len:0x10000 key:0x183f00
00:27:21.765 [2024-07-24 14:25:48.911739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.765 [2024-07-24 14:25:48.911753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b8c5000 len:0x10000 key:0x183f00
00:27:21.765 [2024-07-24 14:25:48.911764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.765 [2024-07-24 14:25:48.911807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b8a4000 len:0x10000 key:0x183f00
00:27:21.765 [2024-07-24 14:25:48.911823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.765 [2024-07-24 14:25:48.911837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b883000 len:0x10000 key:0x183f00
00:27:21.765 [2024-07-24 14:25:48.911850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.765 [2024-07-24 14:25:48.923758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b862000 len:0x10000 key:0x183f00
00:27:21.765 [2024-07-24 14:25:48.923817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.765 [2024-07-24 14:25:48.923838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b841000 len:0x10000 key:0x183f00
00:27:21.765 [2024-07-24 14:25:48.923858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.765 [2024-07-24 14:25:48.923874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b820000 len:0x10000 key:0x183f00
00:27:21.765 [2024-07-24 14:25:48.923888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.765 [2024-07-24 14:25:48.923902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bc1f000 len:0x10000 key:0x183f00
00:27:21.765 [2024-07-24 14:25:48.923915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.765 [2024-07-24 14:25:48.923929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bbfe000 len:0x10000 key:0x183f00
00:27:21.765 [2024-07-24 14:25:48.923942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.765 [2024-07-24 14:25:48.923957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bbdd000 len:0x10000 key:0x183f00
00:27:21.765 [2024-07-24 14:25:48.923970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.765 [2024-07-24 14:25:48.923984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bbbc000 len:0x10000 key:0x183f00
00:27:21.765 [2024-07-24 14:25:48.923997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.765 [2024-07-24 14:25:48.924012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb9b000 len:0x10000 key:0x183f00
00:27:21.765 [2024-07-24 14:25:48.924025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.765 [2024-07-24 14:25:48.924039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb7a000 len:0x10000 key:0x183f00
00:27:21.765 [2024-07-24 14:25:48.924052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:317b p:1 m:0 dnr:0
00:27:21.765 [2024-07-24 14:25:48.926624] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806940 was disconnected and freed. reset controller.
00:27:21.765 [2024-07-24 14:25:48.926664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad0f900 len:0x10000 key:0x184000
00:27:21.765 [2024-07-24 14:25:48.926683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.765 [2024-07-24 14:25:48.926704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acff880 len:0x10000 key:0x184000
00:27:21.765 [2024-07-24 14:25:48.926720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.765 [2024-07-24 14:25:48.926735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acef800 len:0x10000 key:0x184000
00:27:21.765 [2024-07-24 14:25:48.926749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.765 [2024-07-24 14:25:48.926784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acdf780 len:0x10000 key:0x184000
00:27:21.765 [2024-07-24 14:25:48.926811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.765 [2024-07-24 14:25:48.926829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001accf700 len:0x10000 key:0x184000
00:27:21.765 [2024-07-24 14:25:48.926843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.765 [2024-07-24 14:25:48.926858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acbf680 len:0x10000 key:0x184000
00:27:21.765 [2024-07-24 14:25:48.926870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.765 [2024-07-24 14:25:48.926901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acaf600 len:0x10000 key:0x184000
00:27:21.765 [2024-07-24 14:25:48.926914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.765 [2024-07-24 14:25:48.926929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac9f580 len:0x10000 key:0x184000
00:27:21.765 [2024-07-24 14:25:48.926941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.765 [2024-07-24 14:25:48.926955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac8f500 len:0x10000 key:0x184000
00:27:21.765 [2024-07-24 14:25:48.926968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.765 [2024-07-24 14:25:48.926983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac7f480 len:0x10000 key:0x184000
00:27:21.765 [2024-07-24 14:25:48.926996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.765 [2024-07-24 14:25:48.927010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac6f400 len:0x10000 key:0x184000
00:27:21.765 [2024-07-24 14:25:48.927023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.765 [2024-07-24 14:25:48.927037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac5f380 len:0x10000 key:0x184000
00:27:21.765 [2024-07-24 14:25:48.927050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.765 [2024-07-24 14:25:48.927064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac4f300 len:0x10000 key:0x184000
00:27:21.765 [2024-07-24 14:25:48.927077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.765 [2024-07-24 14:25:48.927091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac3f280 len:0x10000 key:0x184000
00:27:21.765 [2024-07-24 14:25:48.927119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.765 [2024-07-24 14:25:48.927138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac2f200 len:0x10000 key:0x184000
00:27:21.765 [2024-07-24 14:25:48.927151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.765 [2024-07-24 14:25:48.927165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac1f180 len:0x10000 key:0x184000
00:27:21.765 [2024-07-24 14:25:48.927177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.765 [2024-07-24 14:25:48.927191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac0f100 len:0x10000 key:0x184000
00:27:21.765 [2024-07-24 14:25:48.927204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.765 [2024-07-24 14:25:48.927218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aff0000 len:0x10000 key:0x183900
00:27:21.765 [2024-07-24 14:25:48.927230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.765 [2024-07-24 14:25:48.927244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afdff80 len:0x10000 key:0x183900
00:27:21.765 [2024-07-24 14:25:48.927256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.765 [2024-07-24 14:25:48.927271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afcff00 len:0x10000 key:0x183900
00:27:21.765 [2024-07-24 14:25:48.927283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.765 [2024-07-24 14:25:48.927298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afbfe80 len:0x10000 key:0x183900
00:27:21.766 [2024-07-24 14:25:48.927312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.766 [2024-07-24 14:25:48.927327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afafe00 len:0x10000 key:0x183900
00:27:21.766 [2024-07-24 14:25:48.927339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.766 [2024-07-24 14:25:48.927355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af9fd80 len:0x10000 key:0x183900
00:27:21.766 [2024-07-24 14:25:48.927368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.766 [2024-07-24 14:25:48.927382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af8fd00 len:0x10000 key:0x183900
00:27:21.766 [2024-07-24 14:25:48.927395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.766 [2024-07-24 14:25:48.927409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af7fc80 len:0x10000 key:0x183900
00:27:21.766 [2024-07-24 14:25:48.927422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.766 [2024-07-24 14:25:48.927436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af6fc00 len:0x10000 key:0x183900
00:27:21.766 [2024-07-24 14:25:48.927452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.766 [2024-07-24 14:25:48.927467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af5fb80 len:0x10000 key:0x183900
00:27:21.766 [2024-07-24 14:25:48.927480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.766 [2024-07-24 14:25:48.927495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af4fb00 len:0x10000 key:0x183900
00:27:21.766 [2024-07-24 14:25:48.927508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.766 [2024-07-24 14:25:48.927522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af3fa80 len:0x10000 key:0x183900
00:27:21.766 [2024-07-24 14:25:48.927534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.766 [2024-07-24 14:25:48.927548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af2fa00 len:0x10000 key:0x183900
00:27:21.766 [2024-07-24 14:25:48.927562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.766 [2024-07-24 14:25:48.927576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af1f980 len:0x10000 key:0x183900
00:27:21.766 [2024-07-24 14:25:48.927588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.766 [2024-07-24 14:25:48.927602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aaefe00 len:0x10000 key:0x184400
00:27:21.766 [2024-07-24 14:25:48.927614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.766 [2024-07-24 14:25:48.927629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fd59000 len:0x10000 key:0x183f00
00:27:21.766 [2024-07-24 14:25:48.927642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.766 [2024-07-24 14:25:48.927656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fd38000 len:0x10000 key:0x183f00
00:27:21.766 [2024-07-24 14:25:48.927668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.766 [2024-07-24 14:25:48.927682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fd17000 len:0x10000 key:0x183f00
00:27:21.766 [2024-07-24 14:25:48.927694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.766 [2024-07-24 14:25:48.927709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fcf6000 len:0x10000 key:0x183f00
00:27:21.766 [2024-07-24 14:25:48.927721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.766 [2024-07-24 14:25:48.927736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fcd5000 len:0x10000 key:0x183f00
00:27:21.766 [2024-07-24 14:25:48.927751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.766 [2024-07-24 14:25:48.927780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fcb4000 len:0x10000 key:0x183f00
00:27:21.766 [2024-07-24 14:25:48.927801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.766 [2024-07-24 14:25:48.927819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fc93000 len:0x10000 key:0x183f00
00:27:21.766 [2024-07-24 14:25:48.927832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.766 [2024-07-24 14:25:48.927847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fc72000 len:0x10000 key:0x183f00
00:27:21.766 [2024-07-24 14:25:48.927859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.766 [2024-07-24 14:25:48.927875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fc51000 len:0x10000 key:0x183f00
00:27:21.766 [2024-07-24 14:25:48.927887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.766 [2024-07-24 14:25:48.927902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fc30000 len:0x10000 key:0x183f00
00:27:21.766 [2024-07-24 14:25:48.927915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.766 [2024-07-24 14:25:48.927930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012405000 len:0x10000 key:0x183f00
00:27:21.766 [2024-07-24 14:25:48.927943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.766 [2024-07-24 14:25:48.927959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000123e4000 len:0x10000 key:0x183f00
00:27:21.766 [2024-07-24 14:25:48.927971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.766 [2024-07-24 14:25:48.927986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000123c3000 len:0x10000 key:0x183f00
00:27:21.766 [2024-07-24 14:25:48.927998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.766 [2024-07-24 14:25:48.928013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000123a2000 len:0x10000 key:0x183f00
00:27:21.766 [2024-07-24 14:25:48.928026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.766 [2024-07-24 14:25:48.928041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012381000 len:0x10000 key:0x183f00
00:27:21.766 [2024-07-24 14:25:48.928053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.766 [2024-07-24 14:25:48.928084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012360000 len:0x10000 key:0x183f00
00:27:21.766 [2024-07-24 14:25:48.928100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.766 [2024-07-24 14:25:48.928116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be2f000 len:0x10000 key:0x183f00
00:27:21.766 [2024-07-24 14:25:48.928129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.766 [2024-07-24 14:25:48.928143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be0e000 len:0x10000 key:0x183f00
00:27:21.766 [2024-07-24 14:25:48.928155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.766 [2024-07-24 14:25:48.928170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bded000 len:0x10000 key:0x183f00
00:27:21.766 [2024-07-24 14:25:48.928183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.766 [2024-07-24 14:25:48.928197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bdcc000 len:0x10000 key:0x183f00
00:27:21.766 [2024-07-24 14:25:48.928209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.766 [2024-07-24 14:25:48.928224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bdab000 len:0x10000 key:0x183f00
00:27:21.766 [2024-07-24 14:25:48.928236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.766 [2024-07-24 14:25:48.928250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd8a000 len:0x10000 key:0x183f00
00:27:21.767 [2024-07-24 14:25:48.928262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.767 [2024-07-24 14:25:48.928277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd69000 len:0x10000 key:0x183f00
00:27:21.767 [2024-07-24 14:25:48.928290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.767 [2024-07-24 14:25:48.928303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd48000 len:0x10000 key:0x183f00
00:27:21.767 [2024-07-24 14:25:48.928316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.767 [2024-07-24 14:25:48.928330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd27000 len:0x10000 key:0x183f00
00:27:21.767 [2024-07-24 14:25:48.928343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.767 [2024-07-24 14:25:48.928357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd06000 len:0x10000 key:0x183f00
00:27:21.767 [2024-07-24 14:25:48.928372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.767 [2024-07-24 14:25:48.928387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bce5000 len:0x10000 key:0x183f00
00:27:21.767 [2024-07-24 14:25:48.928403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.767 [2024-07-24 14:25:48.928418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bcc4000 len:0x10000 key:0x183f00
00:27:21.767 [2024-07-24 14:25:48.928432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.767 [2024-07-24 14:25:48.928446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bca3000 len:0x10000 key:0x183f00
00:27:21.767 [2024-07-24 14:25:48.928459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.767 [2024-07-24 14:25:48.928474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bc82000 len:0x10000 key:0x183f00
00:27:21.767 [2024-07-24 14:25:48.928487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.767 [2024-07-24 14:25:48.928501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bc61000 len:0x10000 key:0x183f00
00:27:21.767 [2024-07-24 14:25:48.928515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.767 [2024-07-24 14:25:48.928529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bc40000 len:0x10000 key:0x183f00
00:27:21.767 [2024-07-24 14:25:48.928542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:2907 p:1 m:0 dnr:0
00:27:21.767 [2024-07-24 14:25:48.930942] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8066c0 was disconnected and freed. reset controller.
00:27:21.767 [2024-07-24 14:25:48.930983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ef800 len:0x10000 key:0x183b00
00:27:21.767 [2024-07-24 14:25:48.931002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0
00:27:21.767 [2024-07-24 14:25:48.931023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0df780 len:0x10000 key:0x183b00
00:27:21.767 [2024-07-24 14:25:48.931040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0
00:27:21.767 [2024-07-24 14:25:48.931056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0cf700 len:0x10000 key:0x183b00
00:27:21.767 [2024-07-24 14:25:48.931085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0
00:27:21.767 [2024-07-24 14:25:48.931100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0bf680 len:0x10000 key:0x183b00
00:27:21.767 [2024-07-24 14:25:48.931114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0
00:27:21.767 [2024-07-24 14:25:48.931142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0af600 len:0x10000 key:0x183b00
00:27:21.767 [2024-07-24 14:25:48.931156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0
00:27:21.767 [2024-07-24 14:25:48.931170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b09f580 len:0x10000 key:0x183b00
00:27:21.767 [2024-07-24 14:25:48.931188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0
00:27:21.767 [2024-07-24 14:25:48.931203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b08f500 len:0x10000 key:0x183b00
00:27:21.767 [2024-07-24 14:25:48.931216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0
00:27:21.767 [2024-07-24 14:25:48.931230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b07f480 len:0x10000 key:0x183b00
00:27:21.767 [2024-07-24 14:25:48.931243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0
00:27:21.767 [2024-07-24 14:25:48.931257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b06f400 len:0x10000 key:0x183b00
00:27:21.767 [2024-07-24 14:25:48.931270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0
00:27:21.767 [2024-07-24 14:25:48.931283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b05f380 len:0x10000 key:0x183b00
00:27:21.767 [2024-07-24 14:25:48.931296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0
00:27:21.767 [2024-07-24 14:25:48.931310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b04f300 len:0x10000 key:0x183b00
00:27:21.767 [2024-07-24 14:25:48.931322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0
00:27:21.767 [2024-07-24 14:25:48.931336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b03f280 len:0x10000 key:0x183b00
00:27:21.767 [2024-07-24 14:25:48.931348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0
00:27:21.767 [2024-07-24 14:25:48.931362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b02f200 len:0x10000 key:0x183b00
00:27:21.767 [2024-07-24 14:25:48.931374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0
00:27:21.767 [2024-07-24 14:25:48.931388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b01f180 len:0x10000 key:0x183b00
00:27:21.767 [2024-07-24 14:25:48.931400] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.767 [2024-07-24 14:25:48.931414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b00f100 len:0x10000 key:0x183b00 00:27:21.767 [2024-07-24 14:25:48.931426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.767 [2024-07-24 14:25:48.931440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3f0000 len:0x10000 key:0x184300 00:27:21.767 [2024-07-24 14:25:48.931452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.767 [2024-07-24 14:25:48.931466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3dff80 len:0x10000 key:0x184300 00:27:21.767 [2024-07-24 14:25:48.931482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.767 [2024-07-24 14:25:48.931496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3cff00 len:0x10000 key:0x184300 00:27:21.767 [2024-07-24 14:25:48.931509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.767 [2024-07-24 14:25:48.931523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3bfe80 len:0x10000 key:0x184300 00:27:21.767 [2024-07-24 14:25:48.931536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.767 [2024-07-24 14:25:48.931549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3afe00 len:0x10000 key:0x184300 00:27:21.767 [2024-07-24 14:25:48.931562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.767 [2024-07-24 14:25:48.931575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b39fd80 len:0x10000 key:0x184300 00:27:21.767 [2024-07-24 14:25:48.931588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.767 [2024-07-24 14:25:48.931602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b38fd00 len:0x10000 key:0x184300 00:27:21.767 [2024-07-24 14:25:48.931614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.767 [2024-07-24 14:25:48.931628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b37fc80 len:0x10000 key:0x184300 00:27:21.767 [2024-07-24 14:25:48.931640] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.767 [2024-07-24 14:25:48.931654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b36fc00 len:0x10000 key:0x184300 00:27:21.767 [2024-07-24 14:25:48.931666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.767 [2024-07-24 14:25:48.931680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b35fb80 len:0x10000 key:0x184300 00:27:21.768 [2024-07-24 14:25:48.931692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.768 [2024-07-24 14:25:48.931706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b34fb00 len:0x10000 key:0x184300 00:27:21.768 [2024-07-24 14:25:48.931718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.768 [2024-07-24 14:25:48.931732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b33fa80 len:0x10000 key:0x184300 00:27:21.768 [2024-07-24 14:25:48.931745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.768 [2024-07-24 14:25:48.931760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b32fa00 len:0x10000 key:0x184300 00:27:21.768 [2024-07-24 14:25:48.931799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.768 [2024-07-24 14:25:48.931818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b31f980 len:0x10000 key:0x184300 00:27:21.768 [2024-07-24 14:25:48.931831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.768 [2024-07-24 14:25:48.931846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b30f900 len:0x10000 key:0x184300 00:27:21.768 [2024-07-24 14:25:48.931859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.768 [2024-07-24 14:25:48.931873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ff880 len:0x10000 key:0x184300 00:27:21.768 [2024-07-24 14:25:48.931886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.768 [2024-07-24 14:25:48.931901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ef800 len:0x10000 key:0x184300 00:27:21.768 [2024-07-24 14:25:48.931913] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.768 [2024-07-24 14:25:48.931928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2df780 len:0x10000 key:0x184300 00:27:21.768 [2024-07-24 14:25:48.931941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.768 [2024-07-24 14:25:48.931955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2cf700 len:0x10000 key:0x184300 00:27:21.768 [2024-07-24 14:25:48.931968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.768 [2024-07-24 14:25:48.931982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2bf680 len:0x10000 key:0x184300 00:27:21.768 [2024-07-24 14:25:48.931995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.768 [2024-07-24 14:25:48.932010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2af600 len:0x10000 key:0x184300 00:27:21.768 [2024-07-24 14:25:48.932022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.768 [2024-07-24 14:25:48.932038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b29f580 len:0x10000 key:0x184300 00:27:21.768 [2024-07-24 14:25:48.932050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.768 [2024-07-24 14:25:48.932080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b28f500 len:0x10000 key:0x184300 00:27:21.768 [2024-07-24 14:25:48.932093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.768 [2024-07-24 14:25:48.932107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b27f480 len:0x10000 key:0x184300 00:27:21.768 [2024-07-24 14:25:48.932122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.768 [2024-07-24 14:25:48.932136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b26f400 len:0x10000 key:0x184300 00:27:21.768 [2024-07-24 14:25:48.932148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.768 [2024-07-24 14:25:48.932162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b25f380 len:0x10000 key:0x184300 00:27:21.768 [2024-07-24 14:25:48.932174] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.768 [2024-07-24 14:25:48.932188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b24f300 len:0x10000 key:0x184300 00:27:21.768 [2024-07-24 14:25:48.932199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.768 [2024-07-24 14:25:48.932213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b23f280 len:0x10000 key:0x184300 00:27:21.768 [2024-07-24 14:25:48.932225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.768 [2024-07-24 14:25:48.932239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b22f200 len:0x10000 key:0x184300 00:27:21.768 [2024-07-24 14:25:48.932251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.768 [2024-07-24 14:25:48.932264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b21f180 len:0x10000 key:0x184300 00:27:21.768 [2024-07-24 14:25:48.932276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.768 [2024-07-24 14:25:48.932289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b20f100 len:0x10000 key:0x184300 00:27:21.768 [2024-07-24 14:25:48.932302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.768 [2024-07-24 14:25:48.932316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5f0000 len:0x10000 key:0x183700 00:27:21.768 [2024-07-24 14:25:48.932328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.768 [2024-07-24 14:25:48.932342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae0f700 len:0x10000 key:0x183900 00:27:21.768 [2024-07-24 14:25:48.932353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.768 [2024-07-24 14:25:48.932367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012570000 len:0x10000 key:0x183f00 00:27:21.768 [2024-07-24 14:25:48.932379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.768 [2024-07-24 14:25:48.932392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012591000 len:0x10000 key:0x183f00 00:27:21.768 [2024-07-24 14:25:48.932410] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.768 [2024-07-24 14:25:48.932425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000125b2000 len:0x10000 key:0x183f00 00:27:21.768 [2024-07-24 14:25:48.932437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.768 [2024-07-24 14:25:48.932452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000125d3000 len:0x10000 key:0x183f00 00:27:21.768 [2024-07-24 14:25:48.932464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.768 [2024-07-24 14:25:48.932477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000125f4000 len:0x10000 key:0x183f00 00:27:21.768 [2024-07-24 14:25:48.932489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.768 [2024-07-24 14:25:48.932503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012615000 len:0x10000 key:0x183f00 00:27:21.768 [2024-07-24 14:25:48.932515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.768 [2024-07-24 14:25:48.932528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012636000 len:0x10000 key:0x183f00 00:27:21.768 [2024-07-24 14:25:48.932542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.768 [2024-07-24 14:25:48.932556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012657000 len:0x10000 key:0x183f00 00:27:21.768 [2024-07-24 14:25:48.932569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.768 [2024-07-24 14:25:48.932583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012678000 len:0x10000 key:0x183f00 00:27:21.768 [2024-07-24 14:25:48.932595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.768 [2024-07-24 14:25:48.932609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012699000 len:0x10000 key:0x183f00 00:27:21.768 [2024-07-24 14:25:48.932621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.768 [2024-07-24 14:25:48.932635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000126ba000 len:0x10000 key:0x183f00 00:27:21.768 [2024-07-24 14:25:48.932647] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.768 [2024-07-24 14:25:48.932661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000126db000 len:0x10000 key:0x183f00 00:27:21.769 [2024-07-24 14:25:48.932673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.769 [2024-07-24 14:25:48.932687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000126fc000 len:0x10000 key:0x183f00 00:27:21.769 [2024-07-24 14:25:48.932699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.769 [2024-07-24 14:25:48.932717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001271d000 len:0x10000 key:0x183f00 00:27:21.769 [2024-07-24 14:25:48.932730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.769 [2024-07-24 14:25:48.932744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001273e000 len:0x10000 key:0x183f00 00:27:21.769 [2024-07-24 14:25:48.932757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.769 [2024-07-24 14:25:48.932787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001275f000 len:0x10000 key:0x183f00 00:27:21.769 [2024-07-24 14:25:48.932813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:a875 p:1 m:0 dnr:0 00:27:21.769 [2024-07-24 14:25:48.935285] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806440 was disconnected and freed. reset controller. 
[... 00:27:21.769-00:27:21.771: repeated WRITE commands (sqid:1, lba 16384 through 24448, len:128, SGL KEYED DATA BLOCK, keys 0x183700/0x184100/0x183d00), each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:52846 cdw0:c592e000 sqhd:0ea7 p:1 m:0 dnr:0 ...]
00:27:21.771 [2024-07-24 14:25:48.951847] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8061c0 was disconnected and freed. reset controller.
[... 00:27:21.771: four ASYNC EVENT REQUEST (0c) admin commands (qid:0, cid:1-4), each completed as ABORTED - SQ DELETION (00/08) qid:0 cid:52846 cdw0:0 sqhd:f1ce p:1 m:0 dnr:0 ...]
00:27:21.771 [2024-07-24 14:25:48.954555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:27:21.771 [2024-07-24 14:25:48.954600] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:27:21.771 [2024-07-24 14:25:48.954614] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
[... 00:27:21.771: four ASYNC EVENT REQUEST (0c) admin commands (qid:0, cid:1-4), each completed as ABORTED - SQ DELETION (00/08) sqhd:c24c ...]
00:27:21.771 [2024-07-24 14:25:48.957149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:27:21.771 [2024-07-24 14:25:48.957193] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:27:21.771 [2024-07-24 14:25:48.957207] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
[... 00:27:21.771: four ASYNC EVENT REQUEST (0c) admin commands (qid:0, cid:1-4), each completed as ABORTED - SQ DELETION (00/08) sqhd:52b7 ...]
00:27:21.771 [2024-07-24 14:25:48.959610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:27:21.771 [2024-07-24 14:25:48.959640] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:27:21.771 [2024-07-24 14:25:48.959653] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
[... 00:27:21.771: four ASYNC EVENT REQUEST (0c) admin commands (qid:0, cid:1-4), each completed as ABORTED - SQ DELETION (00/08) sqhd:e16c ...]
00:27:21.771 [2024-07-24 14:25:48.961767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:27:21.771 [2024-07-24 14:25:48.961825] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:27:21.771 [2024-07-24 14:25:48.961843] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
[... 00:27:21.771: four ASYNC EVENT REQUEST (0c) admin commands (qid:0, cid:1-4), each completed as ABORTED - SQ DELETION (00/08) sqhd:f3db ...]
00:27:21.771 [2024-07-24 14:25:48.964177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:27:21.771 [2024-07-24 14:25:48.964221] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:27:21.771 [2024-07-24 14:25:48.964235] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
[... 00:27:21.771: four ASYNC EVENT REQUEST (0c) admin commands (qid:0, cid:1-4), each completed as ABORTED - SQ DELETION (00/08) sqhd:246c ...]
00:27:21.771 [2024-07-24 14:25:48.966447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:27:21.771 [2024-07-24 14:25:48.966478] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:21.772 [2024-07-24 14:25:48.966507] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
[... 00:27:21.772: four ASYNC EVENT REQUEST (0c) admin commands (qid:0, cid:1-4), each completed as ABORTED - SQ DELETION (00/08) sqhd:9819 ...]
00:27:21.772 [2024-07-24 14:25:48.969097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:27:21.772 [2024-07-24 14:25:48.969150] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:27:21.772 [2024-07-24 14:25:48.969166] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:21.772 [2024-07-24 14:25:48.969187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.772 [2024-07-24 14:25:48.969224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52846 cdw0:0 sqhd:d664 p:1 m:0 dnr:0 00:27:21.772 [2024-07-24 14:25:48.969238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.772 [2024-07-24 14:25:48.969251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52846 cdw0:0 sqhd:d664 p:1 m:0 dnr:0 00:27:21.772 [2024-07-24 14:25:48.969263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.772 [2024-07-24 14:25:48.969275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52846 cdw0:0 sqhd:d664 p:1 m:0 dnr:0 00:27:21.772 [2024-07-24 14:25:48.969290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.772 [2024-07-24 14:25:48.969301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52846 cdw0:0 sqhd:d664 p:1 m:0 dnr:0 00:27:21.772 [2024-07-24 14:25:48.971250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:21.772 [2024-07-24 14:25:48.971295] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:21.772 [2024-07-24 14:25:48.971309] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:27:21.772 [2024-07-24 14:25:48.971330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.772 [2024-07-24 14:25:48.971346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52846 cdw0:0 sqhd:e587 p:1 m:0 dnr:0 00:27:21.772 [2024-07-24 14:25:48.971360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.772 [2024-07-24 14:25:48.971373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52846 cdw0:0 sqhd:e587 p:1 m:0 dnr:0 00:27:21.772 [2024-07-24 14:25:48.971386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.772 [2024-07-24 14:25:48.971397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52846 cdw0:0 sqhd:e587 p:1 m:0 dnr:0 00:27:21.772 [2024-07-24 14:25:48.971410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.772 [2024-07-24 14:25:48.971421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52846 cdw0:0 sqhd:e587 p:1 m:0 dnr:0 00:27:21.772 [2024-07-24 14:25:48.973362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:21.772 [2024-07-24 14:25:48.973406] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:27:21.772 [2024-07-24 14:25:48.973419] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:27:21.772 [2024-07-24 14:25:48.973440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.772 [2024-07-24 14:25:48.973455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52846 cdw0:0 sqhd:9ddb p:1 m:0 dnr:0 00:27:21.772 [2024-07-24 14:25:48.973470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.772 [2024-07-24 14:25:48.973488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52846 cdw0:0 sqhd:9ddb p:1 m:0 dnr:0 00:27:21.772 [2024-07-24 14:25:48.973501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.772 [2024-07-24 14:25:48.973513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52846 cdw0:0 sqhd:9ddb p:1 m:0 dnr:0 00:27:21.772 [2024-07-24 14:25:48.973525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:21.772 [2024-07-24 14:25:48.973537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:52846 cdw0:0 sqhd:9ddb p:1 m:0 dnr:0 00:27:21.772 [2024-07-24 14:25:48.989788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:21.772 [2024-07-24 14:25:48.989833] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:27:21.772 [2024-07-24 14:25:48.989847] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:21.772 [2024-07-24 14:25:48.999706] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:21.772 [2024-07-24 14:25:48.999737] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:21.772 [2024-07-24 14:25:48.999754] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:21.772 [2024-07-24 14:25:48.999814] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:21.772 [2024-07-24 14:25:48.999838] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:21.772 [2024-07-24 14:25:48.999855] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:21.772 [2024-07-24 14:25:48.999873] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:21.772 [2024-07-24 14:25:48.999894] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:21.772 [2024-07-24 14:25:48.999910] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:21.772 [2024-07-24 14:25:48.999926] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:27:21.772 [2024-07-24 14:25:49.000023] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:27:21.772 [2024-07-24 14:25:49.000045] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:27:21.772 [2024-07-24 14:25:49.000061] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:27:21.772 [2024-07-24 14:25:49.000081] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:27:21.772 [2024-07-24 14:25:49.002485] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:27:21.772 task offset: 30720 on job bdev=Nvme1n1 fails
00:27:21.772
00:27:21.772 Latency(us)
00:27:21.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:21.772 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:21.772 Job: Nvme1n1 ended in about 1.88 seconds with error
00:27:21.772 Verification LBA range: start 0x0 length 0x400
00:27:21.772 Nvme1n1 : 1.88 114.90 7.18 34.05 0.00 427042.72 9466.31 1087412.15
00:27:21.772 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:21.772 Job: Nvme2n1 ended in about 1.88 seconds with error
00:27:21.772 Verification LBA range: start 0x0 length 0x400
00:27:21.772 Nvme2n1 : 1.88 118.04 7.38 34.03 0.00 414258.81 7378.87 1087412.15
00:27:21.772 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:21.772 Job: Nvme3n1 ended in about 1.88 seconds with error
00:27:21.772 Verification LBA range: start 0x0 length 0x400
00:27:21.772 Nvme3n1 : 1.88 119.04 7.44 34.01 0.00 407678.93 21262.79 1093625.93
00:27:21.772 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:21.772 Job: Nvme4n1 ended in about 1.88 seconds with error
00:27:21.772 Verification LBA range: start 0x0 length 0x400
00:27:21.772 Nvme4n1 : 1.88 118.98 7.44 33.99 0.00 403937.74 31651.46 1093625.93
00:27:21.772 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:21.772 Job: Nvme5n1 ended in about 1.88 seconds with error
00:27:21.772 Verification LBA range: start 0x0 length 0x400
00:27:21.772 Nvme5n1 : 1.88 110.42 6.90 33.98 0.00 424002.93 39418.69 1193046.47
00:27:21.772 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:21.772 Job: Nvme6n1 ended in about 1.88 seconds with error
00:27:21.772 Verification LBA range: start 0x0 length 0x400
00:27:21.772 Nvme6n1 : 1.88 101.87 6.37 33.96 0.00 446415.08 45244.11 1174405.12
00:27:21.772 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:21.772 Job: Nvme7n1 ended in about 1.89 seconds with error
00:27:21.772 Verification LBA range: start 0x0 length 0x400
00:27:21.772 Nvme7n1 : 1.89 101.82 6.36 33.94 0.00 441864.34 55147.33 1168191.34
00:27:21.772 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:21.772 Job: Nvme8n1 ended in about 1.89 seconds with error
00:27:21.772 Verification LBA range: start 0x0 length 0x400
00:27:21.772 Nvme8n1 : 1.89 101.77 6.36 33.92 0.00 438000.45 62137.84 1155763.77
00:27:21.772 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:21.772 Job: Nvme9n1 ended in about 1.89 seconds with error
00:27:21.772 Verification LBA range: start 0x0 length 0x400
00:27:21.772 Nvme9n1 : 1.89 101.71 6.36 33.90 0.00 433452.37 50098.63 1149549.99
00:27:21.772 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:21.773 Job: Nvme10n1 ended in about 1.89 seconds with error
00:27:21.773 Verification LBA range: start 0x0 length 0x400
00:27:21.773 Nvme10n1 : 1.89 67.77 4.24 33.89 0.00 572245.59 81555.91 1137122.42
00:27:21.773 ===================================================================================================================
00:27:21.773 Total : 1056.32 66.02 339.66 0.00 436306.84 7378.87 1193046.47
00:27:21.773 [2024-07-24 14:25:49.026924] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:27:21.773 [2024-07-24 14:25:49.027014] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:27:21.773 [2024-07-24 14:25:49.027047] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:27:21.773 [2024-07-24 14:25:49.037375] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:27:21.773 [2024-07-24 14:25:49.037405] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:27:21.773 [2024-07-24 14:25:49.037418] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
00:27:21.773 [2024-07-24 14:25:49.037511] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:27:21.773 [2024-07-24 14:25:49.037530] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:27:21.773 [2024-07-24 14:25:49.037541] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e5300
00:27:21.773 [2024-07-24 14:25:49.037629] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:27:21.773 [2024-07-24 14:25:49.037647] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:27:21.773 [2024-07-24 14:25:49.037667] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d9c80
00:27:21.773 [2024-07-24 14:25:49.041241] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:27:21.773 [2024-07-24 14:25:49.041265] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:27:21.773 [2024-07-24 14:25:49.041276] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d2900
00:27:21.773 [2024-07-24 14:25:49.041423] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:27:21.773 [2024-07-24 14:25:49.041442] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:27:21.773 [2024-07-24 14:25:49.041453] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c6340
00:27:21.773 [2024-07-24 14:25:49.041575] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:27:21.773 [2024-07-24 14:25:49.041595] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*:
RDMA connect error -74 00:27:21.773 [2024-07-24 14:25:49.041606] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c60c0 00:27:21.773 [2024-07-24 14:25:49.041734] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:21.773 [2024-07-24 14:25:49.041753] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:21.773 [2024-07-24 14:25:49.041763] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192a8500 00:27:21.773 [2024-07-24 14:25:49.042569] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:21.773 [2024-07-24 14:25:49.042592] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:21.773 [2024-07-24 14:25:49.042603] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928dd80 00:27:21.773 [2024-07-24 14:25:49.042724] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:21.773 [2024-07-24 14:25:49.042743] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:21.773 [2024-07-24 14:25:49.042756] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928ee00 00:27:21.773 [2024-07-24 14:25:49.042865] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:21.773 [2024-07-24 14:25:49.042885] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:21.773 [2024-07-24 14:25:49.042895] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928d5c0 00:27:22.339 14:25:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 188705 00:27:22.339 14:25:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:27:22.339 14:25:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:22.340 14:25:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:22.340 14:25:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:22.340 14:25:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:22.340 14:25:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:22.340 14:25:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:27:22.340 14:25:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:27:22.340 14:25:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:27:22.340 14:25:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:27:22.340 14:25:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:22.340 14:25:49 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:27:22.340 rmmod nvme_rdma 00:27:22.340 rmmod nvme_fabrics 00:27:22.340 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 121: 188705 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10 00:27:22.340 14:25:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:22.340 14:25:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:27:22.340 14:25:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:27:22.340 14:25:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:22.340 14:25:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:22.340 14:25:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:27:22.340 00:27:22.340 real 0m4.714s 00:27:22.340 user 0m15.769s 00:27:22.340 sys 0m1.146s 00:27:22.340 14:25:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:22.340 14:25:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:22.340 ************************************ 00:27:22.340 END TEST nvmf_shutdown_tc3 00:27:22.340 ************************************ 00:27:22.340 14:25:49 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:27:22.340 00:27:22.340 real 0m19.393s 00:27:22.340 user 1m6.877s 00:27:22.340 sys 0m5.097s 00:27:22.340 14:25:49 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:22.340 14:25:49 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:22.340 ************************************ 00:27:22.340 END TEST nvmf_shutdown 00:27:22.340 ************************************ 00:27:22.340 14:25:49 nvmf_rdma -- nvmf/nvmf.sh@86 -- # timing_exit target 00:27:22.340 14:25:49 nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:22.340 14:25:49 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:22.340 14:25:49 nvmf_rdma -- nvmf/nvmf.sh@88 -- # timing_enter host 00:27:22.340 14:25:49 nvmf_rdma -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:22.340 14:25:49 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:22.340 14:25:49 nvmf_rdma -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:27:22.340 14:25:49 nvmf_rdma -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:27:22.340 14:25:49 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:22.340 14:25:49 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:22.340 14:25:49 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:22.340 ************************************ 00:27:22.340 START TEST nvmf_multicontroller 00:27:22.340 ************************************ 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:27:22.340 * Looking for test storage... 
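The teardown traced just above is the nvmftestfini/nvmfcleanup path in test/nvmf/common.sh: sync, then try up to twenty times to unload the nvme-rdma and nvme-fabrics kernel modules with errexit relaxed, because the modules can stay busy while the just-killed bdevperf process is still releasing its queues. A minimal bash sketch of that retry pattern, reconstructed from the common.sh@117 through @125 trace lines rather than copied from the source, so treat the exact control flow as an approximation:

  nvmfcleanup() {
      sync
      set +e                  # tolerate "module is in use" while I/O drains
      for i in {1..20}; do
          # same order as the trace: nvme-rdma first, then nvme-fabrics
          modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
          sleep 1             # assumption: the real helper may pace retries differently
      done
      set -e
      return 0
  }

The "188705 Killed" job notice interleaved with the rmmod output above is exactly the race this loop absorbs: the first unload attempts can fail while the dead bdevperf's references are still being torn down.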
00:27:22.340 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA 
because the rdma stack fails to configure the same IP for host and target.' 00:27:22.340 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:27:22.340 00:27:22.340 real 0m0.068s 00:27:22.340 user 0m0.026s 00:27:22.340 sys 0m0.048s 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:22.340 14:25:49 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:22.340 ************************************ 00:27:22.340 END TEST nvmf_multicontroller 00:27:22.340 ************************************ 00:27:22.340 14:25:49 nvmf_rdma -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:27:22.340 14:25:49 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:22.341 14:25:49 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:22.341 14:25:49 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:22.341 ************************************ 00:27:22.341 START TEST nvmf_aer 00:27:22.341 ************************************ 00:27:22.341 14:25:49 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:27:22.599 * Looking for test storage... 00:27:22.599 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:22.599 14:25:49 nvmf_rdma.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:22.599 14:25:49 nvmf_rdma.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:27:22.599 14:25:49 nvmf_rdma.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:22.599 14:25:49 nvmf_rdma.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:22.599 14:25:49 nvmf_rdma.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:22.599 14:25:49 nvmf_rdma.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:22.599 14:25:49 nvmf_rdma.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:22.599 14:25:49 nvmf_rdma.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:22.599 14:25:49 nvmf_rdma.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:22.599 14:25:49 nvmf_rdma.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:22.599 14:25:49 nvmf_rdma.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:22.599 14:25:49 nvmf_rdma.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:22.599 14:25:49 nvmf_rdma.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:27:22.599 14:25:49 nvmf_rdma.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:27:22.599 14:25:49 nvmf_rdma.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:22.599 14:25:49 nvmf_rdma.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:22.599 14:25:49 nvmf_rdma.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:22.599 14:25:49 nvmf_rdma.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:22.599 14:25:49 nvmf_rdma.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:22.599 14:25:49 nvmf_rdma.nvmf_aer -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:22.599 14:25:49 nvmf_rdma.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:22.599 14:25:49 nvmf_rdma.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:22.599 14:25:49 nvmf_rdma.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.599 14:25:49 nvmf_rdma.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.599 14:25:49 nvmf_rdma.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.599 14:25:49 nvmf_rdma.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:27:22.599 14:25:49 nvmf_rdma.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.599 14:25:49 nvmf_rdma.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:27:22.599 14:25:49 nvmf_rdma.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:22.599 14:25:49 nvmf_rdma.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:22.599 14:25:49 nvmf_rdma.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:22.599 14:25:49 nvmf_rdma.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:22.599 14:25:49 nvmf_rdma.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:22.599 14:25:49 nvmf_rdma.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:22.599 14:25:49 nvmf_rdma.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:22.599 14:25:49 nvmf_rdma.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:22.600 14:25:49 
nvmf_rdma.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:27:22.600 14:25:49 nvmf_rdma.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:27:22.600 14:25:49 nvmf_rdma.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:22.600 14:25:49 nvmf_rdma.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:22.600 14:25:49 nvmf_rdma.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:22.600 14:25:49 nvmf_rdma.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:22.600 14:25:49 nvmf_rdma.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:22.600 14:25:49 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:22.600 14:25:49 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:22.600 14:25:49 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:22.600 14:25:49 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:22.600 14:25:49 nvmf_rdma.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:27:22.600 14:25:49 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 
00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:27:25.158 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:27:25.158 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:27:25.158 Found net devices under 0000:81:00.0: mlx_0_0 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
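The device scan logged here (gather_supported_nvmf_pci_devs) builds arrays of known NVMe-oF-capable vendor:device IDs, Intel E810/X722 plus the Mellanox ConnectX family, matches the host's PCI functions against them, and resolves each hit to its kernel netdev through sysfs; that is how 0000:81:00.0 and 0000:81:00.1 (0x15b3 - 0x1015, a ConnectX-4 Lx pair) become mlx_0_0 and mlx_0_1. A rough standalone sketch of the same idea follows; it shortcuts the script's pre-built pci_bus_cache with an lspci query, so it is illustrative only, not the helper's actual code:

  # Find ConnectX-4 Lx functions (vendor 0x15b3, device 0x1015) and list
  # the net devices that sysfs exposes under each PCI address.
  for pci in $(lspci -Dn -d 15b3:1015 | awk '{print $1}'); do
      pci_net_devs=(/sys/bus/pci/devices/$pci/net/*)
      pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the netdev names
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done

The "${pci_net_devs[@]##*/}" strip is the same expansion the trace shows at common.sh@399, which turns full sysfs paths into bare interface names before the "Found net devices under ..." echo.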
00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:27:25.158 Found net devices under 0000:81:00.1: mlx_0_1 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:27:25.158 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@420 -- # rdma_device_init 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@58 -- # uname 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@62 -- # modprobe ib_cm 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@63 -- # modprobe ib_core 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@64 -- # modprobe ib_umad 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@66 -- # modprobe iw_cm 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@502 -- # allocate_nic_ips 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # get_rdma_if_list 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:27:25.159 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:25.159 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:27:25.159 altname enp129s0f0np0 00:27:25.159 inet 192.168.100.8/24 scope global mlx_0_0 00:27:25.159 valid_lft forever preferred_lft forever 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:27:25.159 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:25.159 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:27:25.159 altname enp129s0f1np1 00:27:25.159 inet 192.168.100.9/24 scope global mlx_0_1 00:27:25.159 valid_lft forever preferred_lft forever 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # get_rdma_if_list 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 
-- # for net_dev in "${net_devs[@]}" 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:27:25.159 192.168.100.9' 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:27:25.159 192.168.100.9' 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # head -n 1 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:27:25.159 192.168.100.9' 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # tail -n +2 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # head -n 1 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:27:25.159 14:25:52 
nvmf_rdma.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=191271 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 191271 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 191271 ']' 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:25.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:25.159 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.159 [2024-07-24 14:25:52.277958] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:27:25.159 [2024-07-24 14:25:52.278044] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:25.159 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.160 [2024-07-24 14:25:52.351581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:25.160 [2024-07-24 14:25:52.440873] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:25.160 [2024-07-24 14:25:52.440935] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:25.160 [2024-07-24 14:25:52.440959] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:25.160 [2024-07-24 14:25:52.440974] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:25.160 [2024-07-24 14:25:52.440986] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
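Everything from get_rdma_if_list down to NVMF_SECOND_TARGET_IP in the trace above reduces to a couple of one-liners: pull the IPv4 address off each RDMA netdev with ip -o -4 addr show, then split the resulting list into first and second target IPs with head and tail, which is where the 192.168.100.8/192.168.100.9 pair used by the rest of the test comes from. A condensed sketch using the same extraction pipeline the trace shows (the mlx_0_0/mlx_0_1 interface names are taken from this run and would differ on other hosts):

  get_ip_address() {
      local interface=$1
      # field 4 of "ip -o -4 addr show" is the CIDR address, e.g. 192.168.100.8/24
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  # Mirror of the head/tail selection at common.sh@457-@458.
  RDMA_IP_LIST=$(printf '%s\n%s\n' "$(get_ip_address mlx_0_0)" "$(get_ip_address mlx_0_1)")
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
  echo "first=$NVMF_FIRST_TARGET_IP second=$NVMF_SECOND_TARGET_IP"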
00:27:25.160 [2024-07-24 14:25:52.441069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.160 [2024-07-24 14:25:52.441135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:25.160 [2024-07-24 14:25:52.441228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:25.160 [2024-07-24 14:25:52.441230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:25.418 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:25.418 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:27:25.418 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:25.418 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:25.418 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.418 14:25:52 nvmf_rdma.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:25.418 14:25:52 nvmf_rdma.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:25.418 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.418 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.418 [2024-07-24 14:25:52.618459] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x208b9e0/0x208fed0) succeed. 00:27:25.418 [2024-07-24 14:25:52.629501] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x208cfd0/0x20d1560) succeed. 00:27:25.418 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.418 14:25:52 nvmf_rdma.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:27:25.418 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.418 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.676 Malloc0 00:27:25.676 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.676 14:25:52 nvmf_rdma.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:27:25.676 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.676 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.676 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.676 14:25:52 nvmf_rdma.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:25.676 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.676 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.676 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.676 14:25:52 nvmf_rdma.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:25.676 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.676 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.676 [2024-07-24 14:25:52.815676] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:25.676 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.676 14:25:52 nvmf_rdma.nvmf_aer -- 
host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:27:25.676 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.676 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.676 [ 00:27:25.676 { 00:27:25.676 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:25.676 "subtype": "Discovery", 00:27:25.676 "listen_addresses": [], 00:27:25.676 "allow_any_host": true, 00:27:25.676 "hosts": [] 00:27:25.676 }, 00:27:25.676 { 00:27:25.676 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:25.676 "subtype": "NVMe", 00:27:25.676 "listen_addresses": [ 00:27:25.676 { 00:27:25.676 "trtype": "RDMA", 00:27:25.676 "adrfam": "IPv4", 00:27:25.676 "traddr": "192.168.100.8", 00:27:25.676 "trsvcid": "4420" 00:27:25.676 } 00:27:25.676 ], 00:27:25.676 "allow_any_host": true, 00:27:25.676 "hosts": [], 00:27:25.676 "serial_number": "SPDK00000000000001", 00:27:25.676 "model_number": "SPDK bdev Controller", 00:27:25.676 "max_namespaces": 2, 00:27:25.676 "min_cntlid": 1, 00:27:25.676 "max_cntlid": 65519, 00:27:25.676 "namespaces": [ 00:27:25.676 { 00:27:25.676 "nsid": 1, 00:27:25.676 "bdev_name": "Malloc0", 00:27:25.676 "name": "Malloc0", 00:27:25.676 "nguid": "8969184D8B874089B1E82C0590937B43", 00:27:25.676 "uuid": "8969184d-8b87-4089-b1e8-2c0590937b43" 00:27:25.676 } 00:27:25.676 ] 00:27:25.676 } 00:27:25.676 ] 00:27:25.676 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.676 14:25:52 nvmf_rdma.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:25.676 14:25:52 nvmf_rdma.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:27:25.676 14:25:52 nvmf_rdma.nvmf_aer -- host/aer.sh@33 -- # aerpid=191303 00:27:25.676 14:25:52 nvmf_rdma.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:27:25.676 14:25:52 nvmf_rdma.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:27:25.676 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:27:25.676 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:25.676 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:27:25.676 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:27:25.676 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:27:25.676 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.676 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:25.676 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:27:25.676 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:27:25.676 14:25:52 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:27:25.676 14:25:53 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:25.676 14:25:53 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:27:25.676 14:25:53 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:27:25.676 14:25:53 nvmf_rdma.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:27:25.676 14:25:53 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.676 14:25:53 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.935 Malloc1 00:27:25.935 14:25:53 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.935 14:25:53 nvmf_rdma.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:27:25.935 14:25:53 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.935 14:25:53 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.935 14:25:53 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.935 14:25:53 nvmf_rdma.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:27:25.935 14:25:53 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.935 14:25:53 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.935 [ 00:27:25.935 { 00:27:25.935 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:25.935 "subtype": "Discovery", 00:27:25.935 "listen_addresses": [], 00:27:25.935 "allow_any_host": true, 00:27:25.935 "hosts": [] 00:27:25.935 }, 00:27:25.935 { 00:27:25.935 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:25.935 "subtype": "NVMe", 00:27:25.935 "listen_addresses": [ 00:27:25.935 { 00:27:25.935 "trtype": "RDMA", 00:27:25.935 "adrfam": "IPv4", 00:27:25.935 "traddr": "192.168.100.8", 00:27:25.935 "trsvcid": "4420" 00:27:25.935 } 00:27:25.935 ], 00:27:25.935 "allow_any_host": true, 00:27:25.935 "hosts": [], 00:27:25.935 "serial_number": "SPDK00000000000001", 00:27:25.935 "model_number": "SPDK bdev Controller", 00:27:25.935 "max_namespaces": 2, 00:27:25.935 "min_cntlid": 1, 00:27:25.935 "max_cntlid": 65519, 00:27:25.935 "namespaces": [ 00:27:25.935 { 00:27:25.935 "nsid": 1, 00:27:25.935 "bdev_name": "Malloc0", 00:27:25.935 "name": "Malloc0", 00:27:25.935 "nguid": "8969184D8B874089B1E82C0590937B43", 00:27:25.935 "uuid": "8969184d-8b87-4089-b1e8-2c0590937b43" 00:27:25.935 }, 00:27:25.935 { 00:27:25.935 "nsid": 2, 00:27:25.935 "bdev_name": "Malloc1", 00:27:25.935 "name": "Malloc1", 00:27:25.935 "nguid": "FA3BDA2E58A2409BAE6B8F28A9E199C6", 00:27:25.935 "uuid": "fa3bda2e-58a2-409b-ae6b-8f28a9e199c6" 00:27:25.935 } 00:27:25.935 ] 00:27:25.935 } 00:27:25.935 ] 00:27:25.935 14:25:53 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.935 14:25:53 nvmf_rdma.nvmf_aer -- host/aer.sh@43 -- # wait 191303 00:27:25.935 Asynchronous Event Request test 00:27:25.935 Attaching to 192.168.100.8 00:27:25.935 Attached to 192.168.100.8 00:27:25.935 Registering asynchronous event callbacks... 00:27:25.935 Starting namespace attribute notice tests for all controllers... 00:27:25.935 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:25.935 aer_cb - Changed Namespace 00:27:25.935 Cleaning up... 
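[editor note] Condensed from the xtrace above, the target-side RPC sequence the aer test drives (commands and values copied verbatim from the trace; rpc_cmd is the harness wrapper around SPDK's rpc.py and assumes a target on /var/tmp/spdk.sock):
    rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc_cmd bdev_malloc_create 64 512 --name Malloc0    # becomes nsid 1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # the aer binary connects here; adding a second namespace then triggers the
    # "Changed Namespace" AEN logged above
    rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2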
00:27:25.935 14:25:53 nvmf_rdma.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:25.935 14:25:53 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.935 14:25:53 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.935 14:25:53 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.935 14:25:53 nvmf_rdma.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:27:25.935 14:25:53 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.935 14:25:53 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.935 14:25:53 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.935 14:25:53 nvmf_rdma.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:25.935 14:25:53 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.935 14:25:53 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:25.936 14:25:53 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.936 14:25:53 nvmf_rdma.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:27:25.936 14:25:53 nvmf_rdma.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:27:25.936 14:25:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:25.936 14:25:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:27:25.936 14:25:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:27:25.936 14:25:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:27:25.936 14:25:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:27:25.936 14:25:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:25.936 14:25:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:27:25.936 rmmod nvme_rdma 00:27:25.936 rmmod nvme_fabrics 00:27:25.936 14:25:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:25.936 14:25:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:27:25.936 14:25:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:27:25.936 14:25:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 191271 ']' 00:27:25.936 14:25:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 191271 00:27:25.936 14:25:53 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 191271 ']' 00:27:25.936 14:25:53 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 191271 00:27:25.936 14:25:53 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:27:25.936 14:25:53 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:25.936 14:25:53 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 191271 00:27:25.936 14:25:53 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:25.936 14:25:53 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:25.936 14:25:53 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 191271' 00:27:25.936 killing process with pid 191271 00:27:25.936 14:25:53 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@965 -- # kill 191271 00:27:25.936 14:25:53 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@970 -- # wait 191271 00:27:26.193 14:25:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:26.193 14:25:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@495 
-- # [[ rdma == \t\c\p ]] 00:27:26.193 00:27:26.193 real 0m3.887s 00:27:26.193 user 0m5.106s 00:27:26.193 sys 0m2.155s 00:27:26.193 14:25:53 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:26.193 14:25:53 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:26.193 ************************************ 00:27:26.193 END TEST nvmf_aer 00:27:26.193 ************************************ 00:27:26.452 14:25:53 nvmf_rdma -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:27:26.452 14:25:53 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:26.452 14:25:53 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:26.452 14:25:53 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:26.452 ************************************ 00:27:26.452 START TEST nvmf_async_init 00:27:26.452 ************************************ 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:27:26.452 * Looking for test storage... 00:27:26.452 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:26.452 14:25:53 
nvmf_rdma.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # nguid=230433055d334368baf29d53507f7dd6 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:27:26.452 14:25:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:27:28.982 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:27:28.982 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:28.982 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:28.983 14:25:56 
nvmf_rdma.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:27:28.983 Found net devices under 0000:81:00.0: mlx_0_0 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:27:28.983 Found net devices under 0000:81:00.1: mlx_0_1 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@420 -- # rdma_device_init 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@58 -- # uname 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@62 -- # modprobe ib_cm 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@63 -- # modprobe ib_core 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@64 -- # modprobe ib_umad 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe iw_cm 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@502 -- # allocate_nic_ips 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # get_rdma_if_list 00:27:28.983 
14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:27:28.983 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:28.983 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:27:28.983 altname enp129s0f0np0 00:27:28.983 inet 192.168.100.8/24 scope global mlx_0_0 00:27:28.983 valid_lft forever preferred_lft forever 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- 
nvmf/common.sh@74 -- # ip=192.168.100.9 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:27:28.983 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:28.983 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:27:28.983 altname enp129s0f1np1 00:27:28.983 inet 192.168.100.9/24 scope global mlx_0_1 00:27:28.983 valid_lft forever preferred_lft forever 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # get_rdma_if_list 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 
00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:27:28.983 192.168.100.9' 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:27:28.983 192.168.100.9' 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # head -n 1 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:27:28.983 192.168.100.9' 00:27:28.983 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # tail -n +2 00:27:28.984 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # head -n 1 00:27:28.984 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:28.984 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:27:28.984 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:28.984 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:27:28.984 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:27:28.984 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:27:28.984 14:25:56 nvmf_rdma.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:28.984 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:28.984 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:28.984 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:28.984 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=193405 00:27:28.984 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:28.984 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 193405 00:27:28.984 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 193405 ']' 00:27:28.984 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:28.984 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:28.984 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:28.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:28.984 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:28.984 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:28.984 [2024-07-24 14:25:56.325032] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:27:28.984 [2024-07-24 14:25:56.325147] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:29.242 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.242 [2024-07-24 14:25:56.397231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.242 [2024-07-24 14:25:56.485422] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:29.242 [2024-07-24 14:25:56.485487] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:29.242 [2024-07-24 14:25:56.485513] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:29.242 [2024-07-24 14:25:56.485527] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:29.242 [2024-07-24 14:25:56.485540] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:29.242 [2024-07-24 14:25:56.485572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.242 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:29.242 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:27:29.242 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:29.242 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:29.242 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:29.501 14:25:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:29.501 14:25:56 nvmf_rdma.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:27:29.501 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.501 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:29.501 [2024-07-24 14:25:56.644474] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1abd960/0x1ac1e10) succeed. 00:27:29.501 [2024-07-24 14:25:56.656711] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1abee10/0x1b034a0) succeed. 
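[editor note] The environment probe above mirrors the earlier aer run; its RDMA IP derivation reduces to the following (commands and resulting addresses copied from the trace):
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.8
    ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.9
    # RDMA_IP_LIST joins the two with a newline; NVMF_FIRST_TARGET_IP is
    # `head -n 1` of it and NVMF_SECOND_TARGET_IP is `tail -n +2 | head -n 1`,
    # exactly as logged.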
00:27:29.501 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.501 14:25:56 nvmf_rdma.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:29.501 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.501 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:29.501 null0 00:27:29.501 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.501 14:25:56 nvmf_rdma.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:29.501 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.501 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:29.501 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.501 14:25:56 nvmf_rdma.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:29.501 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.501 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:29.501 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.501 14:25:56 nvmf_rdma.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 230433055d334368baf29d53507f7dd6 00:27:29.501 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.501 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:29.501 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.501 14:25:56 nvmf_rdma.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:27:29.501 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.501 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:29.501 [2024-07-24 14:25:56.747940] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:29.501 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.501 14:25:56 nvmf_rdma.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:29.501 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.501 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:29.501 nvme0n1 00:27:29.501 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.501 14:25:56 nvmf_rdma.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:29.501 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.501 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:29.501 [ 00:27:29.501 { 00:27:29.501 "name": "nvme0n1", 00:27:29.501 "aliases": [ 00:27:29.501 "23043305-5d33-4368-baf2-9d53507f7dd6" 00:27:29.501 ], 00:27:29.501 "product_name": "NVMe disk", 00:27:29.501 "block_size": 512, 00:27:29.501 "num_blocks": 2097152, 00:27:29.501 "uuid": 
"23043305-5d33-4368-baf2-9d53507f7dd6", 00:27:29.501 "assigned_rate_limits": { 00:27:29.501 "rw_ios_per_sec": 0, 00:27:29.501 "rw_mbytes_per_sec": 0, 00:27:29.501 "r_mbytes_per_sec": 0, 00:27:29.501 "w_mbytes_per_sec": 0 00:27:29.501 }, 00:27:29.501 "claimed": false, 00:27:29.501 "zoned": false, 00:27:29.501 "supported_io_types": { 00:27:29.501 "read": true, 00:27:29.501 "write": true, 00:27:29.501 "unmap": false, 00:27:29.501 "write_zeroes": true, 00:27:29.501 "flush": true, 00:27:29.501 "reset": true, 00:27:29.501 "compare": true, 00:27:29.501 "compare_and_write": true, 00:27:29.501 "abort": true, 00:27:29.501 "nvme_admin": true, 00:27:29.501 "nvme_io": true 00:27:29.501 }, 00:27:29.501 "memory_domains": [ 00:27:29.501 { 00:27:29.501 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:27:29.501 "dma_device_type": 0 00:27:29.501 } 00:27:29.501 ], 00:27:29.501 "driver_specific": { 00:27:29.501 "nvme": [ 00:27:29.501 { 00:27:29.501 "trid": { 00:27:29.501 "trtype": "RDMA", 00:27:29.501 "adrfam": "IPv4", 00:27:29.501 "traddr": "192.168.100.8", 00:27:29.501 "trsvcid": "4420", 00:27:29.501 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:29.501 }, 00:27:29.501 "ctrlr_data": { 00:27:29.501 "cntlid": 1, 00:27:29.501 "vendor_id": "0x8086", 00:27:29.501 "model_number": "SPDK bdev Controller", 00:27:29.501 "serial_number": "00000000000000000000", 00:27:29.501 "firmware_revision": "24.05.1", 00:27:29.501 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:29.501 "oacs": { 00:27:29.501 "security": 0, 00:27:29.501 "format": 0, 00:27:29.501 "firmware": 0, 00:27:29.501 "ns_manage": 0 00:27:29.501 }, 00:27:29.501 "multi_ctrlr": true, 00:27:29.501 "ana_reporting": false 00:27:29.501 }, 00:27:29.501 "vs": { 00:27:29.501 "nvme_version": "1.3" 00:27:29.501 }, 00:27:29.501 "ns_data": { 00:27:29.501 "id": 1, 00:27:29.501 "can_share": true 00:27:29.501 } 00:27:29.501 } 00:27:29.501 ], 00:27:29.501 "mp_policy": "active_passive" 00:27:29.501 } 00:27:29.501 } 00:27:29.501 ] 00:27:29.501 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.501 14:25:56 nvmf_rdma.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:27:29.501 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.501 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:29.501 [2024-07-24 14:25:56.866017] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:29.759 [2024-07-24 14:25:56.888680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:29.760 [2024-07-24 14:25:56.914075] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:29.760 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.760 14:25:56 nvmf_rdma.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:29.760 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.760 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:29.760 [ 00:27:29.760 { 00:27:29.760 "name": "nvme0n1", 00:27:29.760 "aliases": [ 00:27:29.760 "23043305-5d33-4368-baf2-9d53507f7dd6" 00:27:29.760 ], 00:27:29.760 "product_name": "NVMe disk", 00:27:29.760 "block_size": 512, 00:27:29.760 "num_blocks": 2097152, 00:27:29.760 "uuid": "23043305-5d33-4368-baf2-9d53507f7dd6", 00:27:29.760 "assigned_rate_limits": { 00:27:29.760 "rw_ios_per_sec": 0, 00:27:29.760 "rw_mbytes_per_sec": 0, 00:27:29.760 "r_mbytes_per_sec": 0, 00:27:29.760 "w_mbytes_per_sec": 0 00:27:29.760 }, 00:27:29.760 "claimed": false, 00:27:29.760 "zoned": false, 00:27:29.760 "supported_io_types": { 00:27:29.760 "read": true, 00:27:29.760 "write": true, 00:27:29.760 "unmap": false, 00:27:29.760 "write_zeroes": true, 00:27:29.760 "flush": true, 00:27:29.760 "reset": true, 00:27:29.760 "compare": true, 00:27:29.760 "compare_and_write": true, 00:27:29.760 "abort": true, 00:27:29.760 "nvme_admin": true, 00:27:29.760 "nvme_io": true 00:27:29.760 }, 00:27:29.760 "memory_domains": [ 00:27:29.760 { 00:27:29.760 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:27:29.760 "dma_device_type": 0 00:27:29.760 } 00:27:29.760 ], 00:27:29.760 "driver_specific": { 00:27:29.760 "nvme": [ 00:27:29.760 { 00:27:29.760 "trid": { 00:27:29.760 "trtype": "RDMA", 00:27:29.760 "adrfam": "IPv4", 00:27:29.760 "traddr": "192.168.100.8", 00:27:29.760 "trsvcid": "4420", 00:27:29.760 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:29.760 }, 00:27:29.760 "ctrlr_data": { 00:27:29.760 "cntlid": 2, 00:27:29.760 "vendor_id": "0x8086", 00:27:29.760 "model_number": "SPDK bdev Controller", 00:27:29.760 "serial_number": "00000000000000000000", 00:27:29.760 "firmware_revision": "24.05.1", 00:27:29.760 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:29.760 "oacs": { 00:27:29.760 "security": 0, 00:27:29.760 "format": 0, 00:27:29.760 "firmware": 0, 00:27:29.760 "ns_manage": 0 00:27:29.760 }, 00:27:29.760 "multi_ctrlr": true, 00:27:29.760 "ana_reporting": false 00:27:29.760 }, 00:27:29.760 "vs": { 00:27:29.760 "nvme_version": "1.3" 00:27:29.760 }, 00:27:29.760 "ns_data": { 00:27:29.760 "id": 1, 00:27:29.760 "can_share": true 00:27:29.760 } 00:27:29.760 } 00:27:29.760 ], 00:27:29.760 "mp_policy": "active_passive" 00:27:29.760 } 00:27:29.760 } 00:27:29.760 ] 00:27:29.760 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.760 14:25:56 nvmf_rdma.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.760 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.760 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:29.760 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.760 14:25:56 nvmf_rdma.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:27:29.760 14:25:56 nvmf_rdma.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.A0aUkNpJm8 00:27:29.760 14:25:56 nvmf_rdma.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:29.760 14:25:56 
nvmf_rdma.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.A0aUkNpJm8 00:27:29.760 14:25:56 nvmf_rdma.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:29.760 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.760 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:29.760 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.760 14:25:56 nvmf_rdma.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:27:29.760 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.760 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:29.760 [2024-07-24 14:25:56.976953] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:27:29.760 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.760 14:25:56 nvmf_rdma.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.A0aUkNpJm8 00:27:29.760 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.760 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:29.760 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.760 14:25:56 nvmf_rdma.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.A0aUkNpJm8 00:27:29.760 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.760 14:25:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:29.760 [2024-07-24 14:25:56.992954] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:29.760 nvme0n1 00:27:29.760 14:25:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.760 14:25:57 nvmf_rdma.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:29.760 14:25:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.760 14:25:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:29.760 [ 00:27:29.760 { 00:27:29.760 "name": "nvme0n1", 00:27:29.760 "aliases": [ 00:27:29.760 "23043305-5d33-4368-baf2-9d53507f7dd6" 00:27:29.760 ], 00:27:29.760 "product_name": "NVMe disk", 00:27:29.760 "block_size": 512, 00:27:29.760 "num_blocks": 2097152, 00:27:29.760 "uuid": "23043305-5d33-4368-baf2-9d53507f7dd6", 00:27:29.760 "assigned_rate_limits": { 00:27:29.760 "rw_ios_per_sec": 0, 00:27:29.760 "rw_mbytes_per_sec": 0, 00:27:29.760 "r_mbytes_per_sec": 0, 00:27:29.760 "w_mbytes_per_sec": 0 00:27:29.760 }, 00:27:29.760 "claimed": false, 00:27:29.760 "zoned": false, 00:27:29.760 "supported_io_types": { 00:27:29.760 "read": true, 00:27:29.760 "write": true, 00:27:29.760 "unmap": false, 00:27:29.760 "write_zeroes": true, 00:27:29.760 "flush": true, 00:27:29.760 "reset": true, 00:27:29.760 "compare": true, 00:27:29.760 "compare_and_write": true, 00:27:29.760 "abort": true, 
00:27:29.760 "nvme_admin": true, 00:27:29.760 "nvme_io": true 00:27:29.760 }, 00:27:29.760 "memory_domains": [ 00:27:29.760 { 00:27:29.760 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:27:29.760 "dma_device_type": 0 00:27:29.760 } 00:27:29.760 ], 00:27:29.760 "driver_specific": { 00:27:29.760 "nvme": [ 00:27:29.760 { 00:27:29.760 "trid": { 00:27:29.760 "trtype": "RDMA", 00:27:29.760 "adrfam": "IPv4", 00:27:29.760 "traddr": "192.168.100.8", 00:27:29.760 "trsvcid": "4421", 00:27:29.760 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:29.760 }, 00:27:29.760 "ctrlr_data": { 00:27:29.760 "cntlid": 3, 00:27:29.760 "vendor_id": "0x8086", 00:27:29.760 "model_number": "SPDK bdev Controller", 00:27:29.760 "serial_number": "00000000000000000000", 00:27:29.760 "firmware_revision": "24.05.1", 00:27:29.760 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:29.760 "oacs": { 00:27:29.760 "security": 0, 00:27:29.760 "format": 0, 00:27:29.760 "firmware": 0, 00:27:29.760 "ns_manage": 0 00:27:29.760 }, 00:27:29.760 "multi_ctrlr": true, 00:27:29.760 "ana_reporting": false 00:27:29.760 }, 00:27:29.760 "vs": { 00:27:29.760 "nvme_version": "1.3" 00:27:29.760 }, 00:27:29.760 "ns_data": { 00:27:29.760 "id": 1, 00:27:29.760 "can_share": true 00:27:29.760 } 00:27:29.760 } 00:27:29.760 ], 00:27:29.760 "mp_policy": "active_passive" 00:27:29.760 } 00:27:29.760 } 00:27:29.760 ] 00:27:29.760 14:25:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.760 14:25:57 nvmf_rdma.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.760 14:25:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.760 14:25:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:29.760 14:25:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.760 14:25:57 nvmf_rdma.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.A0aUkNpJm8 00:27:29.760 14:25:57 nvmf_rdma.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:27:29.760 14:25:57 nvmf_rdma.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:27:29.760 14:25:57 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:29.760 14:25:57 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:27:29.760 14:25:57 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:27:29.760 14:25:57 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:27:29.760 14:25:57 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:27:29.760 14:25:57 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:29.760 14:25:57 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:27:29.760 rmmod nvme_rdma 00:27:30.018 rmmod nvme_fabrics 00:27:30.018 14:25:57 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:30.018 14:25:57 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:27:30.018 14:25:57 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:27:30.018 14:25:57 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 193405 ']' 00:27:30.018 14:25:57 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 193405 00:27:30.019 14:25:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 193405 ']' 00:27:30.019 14:25:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 193405 00:27:30.019 14:25:57 
nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:27:30.019 14:25:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:30.019 14:25:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 193405 00:27:30.019 14:25:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:30.019 14:25:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:30.019 14:25:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 193405' 00:27:30.019 killing process with pid 193405 00:27:30.019 14:25:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 193405 00:27:30.019 14:25:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 193405 00:27:30.276 14:25:57 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:30.276 14:25:57 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:27:30.276 00:27:30.276 real 0m3.821s 00:27:30.276 user 0m2.135s 00:27:30.276 sys 0m2.081s 00:27:30.276 14:25:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:30.276 14:25:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:30.276 ************************************ 00:27:30.276 END TEST nvmf_async_init 00:27:30.276 ************************************ 00:27:30.276 14:25:57 nvmf_rdma -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:27:30.276 14:25:57 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:30.276 14:25:57 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:30.276 14:25:57 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:30.276 ************************************ 00:27:30.276 START TEST dma 00:27:30.276 ************************************ 00:27:30.276 14:25:57 nvmf_rdma.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:27:30.276 * Looking for test storage... 
00:27:30.276 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:30.276 14:25:57 nvmf_rdma.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:30.277 14:25:57 nvmf_rdma.dma -- nvmf/common.sh@7 -- # uname -s 00:27:30.277 14:25:57 nvmf_rdma.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:30.277 14:25:57 nvmf_rdma.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:30.277 14:25:57 nvmf_rdma.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:30.277 14:25:57 nvmf_rdma.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:30.277 14:25:57 nvmf_rdma.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:30.277 14:25:57 nvmf_rdma.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:30.277 14:25:57 nvmf_rdma.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:30.277 14:25:57 nvmf_rdma.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:30.277 14:25:57 nvmf_rdma.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:30.277 14:25:57 nvmf_rdma.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:30.277 14:25:57 nvmf_rdma.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:27:30.277 14:25:57 nvmf_rdma.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:27:30.277 14:25:57 nvmf_rdma.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:30.277 14:25:57 nvmf_rdma.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:30.277 14:25:57 nvmf_rdma.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:30.277 14:25:57 nvmf_rdma.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:30.277 14:25:57 nvmf_rdma.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:30.277 14:25:57 nvmf_rdma.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:30.277 14:25:57 nvmf_rdma.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:30.277 14:25:57 nvmf_rdma.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:30.277 14:25:57 nvmf_rdma.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.277 14:25:57 nvmf_rdma.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.277 14:25:57 nvmf_rdma.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.277 14:25:57 nvmf_rdma.dma -- paths/export.sh@5 -- # export PATH 00:27:30.277 14:25:57 nvmf_rdma.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.277 14:25:57 nvmf_rdma.dma -- nvmf/common.sh@47 -- # : 0 00:27:30.277 14:25:57 nvmf_rdma.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:30.277 14:25:57 nvmf_rdma.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:30.277 14:25:57 nvmf_rdma.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:30.277 14:25:57 nvmf_rdma.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:30.277 14:25:57 nvmf_rdma.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:30.277 14:25:57 nvmf_rdma.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:30.277 14:25:57 nvmf_rdma.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:30.277 14:25:57 nvmf_rdma.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:30.277 14:25:57 nvmf_rdma.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:27:30.277 14:25:57 nvmf_rdma.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:27:30.277 14:25:57 nvmf_rdma.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:27:30.277 14:25:57 nvmf_rdma.dma -- host/dma.sh@18 -- # subsystem=0 00:27:30.277 14:25:57 nvmf_rdma.dma -- host/dma.sh@93 -- # nvmftestinit 00:27:30.277 14:25:57 nvmf_rdma.dma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:27:30.277 14:25:57 nvmf_rdma.dma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:30.277 14:25:57 nvmf_rdma.dma -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:30.277 14:25:57 nvmf_rdma.dma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:30.277 14:25:57 nvmf_rdma.dma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:30.277 14:25:57 nvmf_rdma.dma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.277 14:25:57 nvmf_rdma.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:30.277 14:25:57 nvmf_rdma.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:30.277 14:25:57 nvmf_rdma.dma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:30.277 14:25:57 nvmf_rdma.dma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:30.277 14:25:57 nvmf_rdma.dma -- nvmf/common.sh@285 -- # xtrace_disable 00:27:30.277 14:25:57 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:32.804 14:26:00 nvmf_rdma.dma -- 
nvmf/common.sh@291 -- # pci_devs=() 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@295 -- # net_devs=() 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@296 -- # e810=() 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@296 -- # local -ga e810 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@297 -- # x722=() 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@297 -- # local -ga x722 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@298 -- # mlx=() 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@298 -- # local -ga mlx 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:27:32.804 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:32.804 14:26:00 nvmf_rdma.dma -- 
nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:27:32.804 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:27:32.804 Found net devices under 0000:81:00.0: mlx_0_0 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:27:32.804 Found net devices under 0000:81:00.1: mlx_0_1 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@414 -- # is_hw=yes 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@420 -- # rdma_device_init 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@58 -- # uname 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@63 -- # modprobe ib_core 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:27:32.804 14:26:00 nvmf_rdma.dma -- 
nvmf/common.sh@66 -- # modprobe iw_cm 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:27:32.804 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:32.804 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:27:32.804 altname enp129s0f0np0 00:27:32.804 inet 192.168.100.8/24 scope global mlx_0_0 00:27:32.804 valid_lft forever preferred_lft forever 00:27:32.804 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:32.805 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:27:32.805 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:32.805 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:32.805 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:32.805 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@113 -- 
# cut -d/ -f1 00:27:32.805 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:27:32.805 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:27:32.805 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:27:32.805 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:32.805 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:27:32.805 altname enp129s0f1np1 00:27:32.805 inet 192.168.100.9/24 scope global mlx_0_1 00:27:32.805 valid_lft forever preferred_lft forever 00:27:32.805 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@422 -- # return 0 00:27:32.805 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:32.805 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:32.805 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:27:32.805 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:27:32.805 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:27:32.805 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:32.805 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:32.805 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:32.805 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:32.805 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:32.805 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:32.805 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:32.805 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:32.805 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:32.805 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:27:32.805 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:32.805 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:32.805 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:32.805 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:32.805 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:32.805 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:32.805 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:27:32.805 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:32.805 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:27:32.805 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:32.805 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:32.805 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:32.805 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:33.062 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:33.062 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:27:33.062 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:33.062 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:33.062 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 
00:27:33.062 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:33.062 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:27:33.062 192.168.100.9' 00:27:33.062 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:27:33.062 192.168.100.9' 00:27:33.062 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@457 -- # head -n 1 00:27:33.062 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:33.062 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:27:33.062 192.168.100.9' 00:27:33.062 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@458 -- # tail -n +2 00:27:33.062 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@458 -- # head -n 1 00:27:33.062 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:33.062 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:27:33.062 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:33.062 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:27:33.062 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:27:33.062 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:27:33.062 14:26:00 nvmf_rdma.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:27:33.062 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:33.062 14:26:00 nvmf_rdma.dma -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:33.062 14:26:00 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:27:33.062 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@481 -- # nvmfpid=195576 00:27:33.062 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:33.062 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@482 -- # waitforlisten 195576 00:27:33.062 14:26:00 nvmf_rdma.dma -- common/autotest_common.sh@827 -- # '[' -z 195576 ']' 00:27:33.062 14:26:00 nvmf_rdma.dma -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.062 14:26:00 nvmf_rdma.dma -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:33.062 14:26:00 nvmf_rdma.dma -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:33.062 14:26:00 nvmf_rdma.dma -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:33.062 14:26:00 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:27:33.063 [2024-07-24 14:26:00.248849] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:27:33.063 [2024-07-24 14:26:00.248941] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:33.063 EAL: No free 2048 kB hugepages reported on node 1 00:27:33.063 [2024-07-24 14:26:00.322369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:33.063 [2024-07-24 14:26:00.417165] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:33.063 [2024-07-24 14:26:00.417229] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:33.063 [2024-07-24 14:26:00.417246] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:33.063 [2024-07-24 14:26:00.417260] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:33.063 [2024-07-24 14:26:00.417271] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:33.063 [2024-07-24 14:26:00.417351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:33.063 [2024-07-24 14:26:00.417356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.321 14:26:00 nvmf_rdma.dma -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:33.321 14:26:00 nvmf_rdma.dma -- common/autotest_common.sh@860 -- # return 0 00:27:33.321 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:33.321 14:26:00 nvmf_rdma.dma -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:33.321 14:26:00 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:27:33.321 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:33.321 14:26:00 nvmf_rdma.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:27:33.321 14:26:00 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.321 14:26:00 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:27:33.321 [2024-07-24 14:26:00.586238] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13b73f0/0x13bb8a0) succeed. 00:27:33.321 [2024-07-24 14:26:00.596807] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13b88a0/0x13fcf30) succeed. 00:27:33.321 14:26:00 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.321 14:26:00 nvmf_rdma.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:27:33.321 14:26:00 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.321 14:26:00 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:27:33.579 Malloc0 00:27:33.579 14:26:00 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.579 14:26:00 nvmf_rdma.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:27:33.579 14:26:00 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.579 14:26:00 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:27:33.579 14:26:00 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.579 14:26:00 nvmf_rdma.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:27:33.579 14:26:00 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.579 14:26:00 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:27:33.579 14:26:00 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.579 14:26:00 nvmf_rdma.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:27:33.579 14:26:00 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.579 14:26:00 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:27:33.579 [2024-07-24 14:26:00.774764] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:33.579 14:26:00 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:27:33.579 14:26:00 nvmf_rdma.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:27:33.579 14:26:00 nvmf_rdma.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:27:33.579 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@532 -- # config=() 00:27:33.579 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@532 -- # local subsystem config 00:27:33.579 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:33.579 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:33.579 { 00:27:33.579 "params": { 00:27:33.579 "name": "Nvme$subsystem", 00:27:33.579 "trtype": "$TEST_TRANSPORT", 00:27:33.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.579 "adrfam": "ipv4", 00:27:33.579 "trsvcid": "$NVMF_PORT", 00:27:33.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.579 "hdgst": ${hdgst:-false}, 00:27:33.579 "ddgst": ${ddgst:-false} 00:27:33.579 }, 00:27:33.579 "method": "bdev_nvme_attach_controller" 00:27:33.579 } 00:27:33.579 EOF 00:27:33.579 )") 00:27:33.579 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@554 -- # cat 00:27:33.579 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@556 -- # jq . 00:27:33.579 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@557 -- # IFS=, 00:27:33.579 14:26:00 nvmf_rdma.dma -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:33.579 "params": { 00:27:33.579 "name": "Nvme0", 00:27:33.580 "trtype": "rdma", 00:27:33.580 "traddr": "192.168.100.8", 00:27:33.580 "adrfam": "ipv4", 00:27:33.580 "trsvcid": "4420", 00:27:33.580 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:33.580 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:33.580 "hdgst": false, 00:27:33.580 "ddgst": false 00:27:33.580 }, 00:27:33.580 "method": "bdev_nvme_attach_controller" 00:27:33.580 }' 00:27:33.580 [2024-07-24 14:26:00.817770] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:27:33.580 [2024-07-24 14:26:00.817882] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid195603 ] 00:27:33.580 EAL: No free 2048 kB hugepages reported on node 1 00:27:33.580 [2024-07-24 14:26:00.894476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:33.837 [2024-07-24 14:26:00.983704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:33.837 [2024-07-24 14:26:00.983707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:39.097 bdev Nvme0n1 reports 1 memory domains 00:27:39.097 bdev Nvme0n1 supports RDMA memory domain 00:27:39.097 Initialization complete, running randrw IO for 5 sec on 2 cores 00:27:39.097 ========================================================================== 00:27:39.097 Latency [us] 00:27:39.097 IOPS MiB/s Average min max 00:27:39.097 Core 2: 18349.12 71.68 871.06 352.93 8500.91 00:27:39.097 Core 3: 18656.41 72.88 856.71 328.40 8475.46 00:27:39.097 ========================================================================== 00:27:39.097 Total : 37005.52 144.55 863.82 328.40 8500.91 00:27:39.097 00:27:39.097 Total operations: 185093, translate 185093 pull_push 0 memzero 0 00:27:39.097 14:26:06 nvmf_rdma.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:27:39.097 14:26:06 nvmf_rdma.dma -- host/dma.sh@107 -- # gen_malloc_json 00:27:39.097 14:26:06 nvmf_rdma.dma -- host/dma.sh@21 -- # jq . 00:27:39.355 [2024-07-24 14:26:06.495165] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:27:39.355 [2024-07-24 14:26:06.495253] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid196269 ] 00:27:39.355 EAL: No free 2048 kB hugepages reported on node 1 00:27:39.355 [2024-07-24 14:26:06.565157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:39.355 [2024-07-24 14:26:06.649211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:39.355 [2024-07-24 14:26:06.649215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:45.909 bdev Malloc0 reports 2 memory domains 00:27:45.909 bdev Malloc0 doesn't support RDMA memory domain 00:27:45.909 Initialization complete, running randrw IO for 5 sec on 2 cores 00:27:45.909 ========================================================================== 00:27:45.909 Latency [us] 00:27:45.909 IOPS MiB/s Average min max 00:27:45.909 Core 2: 12295.39 48.03 1300.32 478.31 1760.19 00:27:45.909 Core 3: 12431.33 48.56 1286.09 491.83 2297.85 00:27:45.909 ========================================================================== 00:27:45.909 Total : 24726.71 96.59 1293.17 478.31 2297.85 00:27:45.909 00:27:45.909 Total operations: 123688, translate 0 pull_push 494752 memzero 0 00:27:45.909 14:26:12 nvmf_rdma.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:27:45.909 14:26:12 nvmf_rdma.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:27:45.909 14:26:12 nvmf_rdma.dma -- host/dma.sh@48 -- # local subsystem=0 00:27:45.909 14:26:12 nvmf_rdma.dma -- host/dma.sh@50 -- # jq . 00:27:45.909 Ignoring -M option 00:27:45.909 [2024-07-24 14:26:12.075551] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:27:45.909 [2024-07-24 14:26:12.075624] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid196914 ] 00:27:45.909 EAL: No free 2048 kB hugepages reported on node 1 00:27:45.909 [2024-07-24 14:26:12.145891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:45.909 [2024-07-24 14:26:12.228851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:45.909 [2024-07-24 14:26:12.228855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:51.255 bdev f611fd5f-8c9f-42b6-95d0-567598da3839 reports 1 memory domains 00:27:51.255 bdev f611fd5f-8c9f-42b6-95d0-567598da3839 supports RDMA memory domain 00:27:51.255 Initialization complete, running randread IO for 5 sec on 2 cores 00:27:51.255 ========================================================================== 00:27:51.255 Latency [us] 00:27:51.255 IOPS MiB/s Average min max 00:27:51.255 Core 2: 63482.58 247.98 251.05 81.60 1852.40 00:27:51.255 Core 3: 65574.49 256.15 243.03 95.76 1858.52 00:27:51.255 ========================================================================== 00:27:51.255 Total : 129057.07 504.13 246.98 81.60 1858.52 00:27:51.255 00:27:51.255 Total operations: 645373, translate 0 pull_push 0 memzero 645373 00:27:51.255 14:26:17 nvmf_rdma.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:27:51.255 EAL: No free 2048 kB hugepages reported on node 1 00:27:51.255 [2024-07-24 14:26:17.864237] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:27:53.150 Initializing NVMe Controllers 00:27:53.150 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:27:53.150 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:27:53.150 Initialization complete. Launching workers. 00:27:53.150 ======================================================== 00:27:53.150 Latency(us) 00:27:53.150 Device Information : IOPS MiB/s Average min max 00:27:53.150 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2016.00 7.87 7979.72 7933.75 8002.61 00:27:53.150 ======================================================== 00:27:53.151 Total : 2016.00 7.87 7979.72 7933.75 8002.61 00:27:53.151 00:27:53.151 14:26:20 nvmf_rdma.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:27:53.151 14:26:20 nvmf_rdma.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:27:53.151 14:26:20 nvmf_rdma.dma -- host/dma.sh@48 -- # local subsystem=0 00:27:53.151 14:26:20 nvmf_rdma.dma -- host/dma.sh@50 -- # jq . 00:27:53.151 [2024-07-24 14:26:20.235334] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:27:53.151 [2024-07-24 14:26:20.235407] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid197953 ] 00:27:53.151 EAL: No free 2048 kB hugepages reported on node 1 00:27:53.151 [2024-07-24 14:26:20.305718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:53.151 [2024-07-24 14:26:20.394559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:53.151 [2024-07-24 14:26:20.394563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:59.704 bdev 6d798d4f-7898-4d41-b446-0e381cca749a reports 1 memory domains 00:27:59.704 bdev 6d798d4f-7898-4d41-b446-0e381cca749a supports RDMA memory domain 00:27:59.704 Initialization complete, running randrw IO for 5 sec on 2 cores 00:27:59.704 ========================================================================== 00:27:59.704 Latency [us] 00:27:59.704 IOPS MiB/s Average min max 00:27:59.704 Core 2: 15589.54 60.90 1025.43 23.30 7925.05 00:27:59.704 Core 3: 15993.71 62.48 999.52 16.60 7588.16 00:27:59.704 ========================================================================== 00:27:59.704 Total : 31583.25 123.37 1012.31 16.60 7925.05 00:27:59.704 00:27:59.704 Total operations: 157931, translate 157824 pull_push 0 memzero 107 00:27:59.704 14:26:25 nvmf_rdma.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:27:59.704 14:26:25 nvmf_rdma.dma -- host/dma.sh@120 -- # nvmftestfini 00:27:59.704 14:26:25 nvmf_rdma.dma -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:59.704 14:26:25 nvmf_rdma.dma -- nvmf/common.sh@117 -- # sync 00:27:59.704 14:26:25 nvmf_rdma.dma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:27:59.705 14:26:25 nvmf_rdma.dma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:27:59.705 14:26:25 nvmf_rdma.dma -- nvmf/common.sh@120 -- # set +e 00:27:59.705 14:26:25 nvmf_rdma.dma -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:59.705 14:26:25 nvmf_rdma.dma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:27:59.705 rmmod nvme_rdma 00:27:59.705 rmmod nvme_fabrics 00:27:59.705 14:26:25 nvmf_rdma.dma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:59.705 14:26:25 nvmf_rdma.dma -- nvmf/common.sh@124 -- # set -e 00:27:59.705 14:26:25 nvmf_rdma.dma -- nvmf/common.sh@125 -- # return 0 00:27:59.705 14:26:25 nvmf_rdma.dma -- nvmf/common.sh@489 -- # '[' -n 195576 ']' 00:27:59.705 14:26:25 nvmf_rdma.dma -- nvmf/common.sh@490 -- # killprocess 195576 00:27:59.705 14:26:25 nvmf_rdma.dma -- common/autotest_common.sh@946 -- # '[' -z 195576 ']' 00:27:59.705 14:26:25 nvmf_rdma.dma -- common/autotest_common.sh@950 -- # kill -0 195576 00:27:59.705 14:26:25 nvmf_rdma.dma -- common/autotest_common.sh@951 -- # uname 00:27:59.705 14:26:25 nvmf_rdma.dma -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:59.705 14:26:25 nvmf_rdma.dma -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 195576 00:27:59.705 14:26:26 nvmf_rdma.dma -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:59.705 14:26:26 nvmf_rdma.dma -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:59.705 14:26:26 nvmf_rdma.dma -- common/autotest_common.sh@964 -- # echo 'killing process with pid 195576' 00:27:59.705 killing process with pid 195576 00:27:59.705 14:26:26 nvmf_rdma.dma -- common/autotest_common.sh@965 -- # kill 195576 00:27:59.705 14:26:26 nvmf_rdma.dma -- common/autotest_common.sh@970 -- # wait 
195576 00:27:59.705 14:26:26 nvmf_rdma.dma -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:59.705 14:26:26 nvmf_rdma.dma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:27:59.705 00:27:59.705 real 0m28.929s 00:27:59.705 user 1m35.921s 00:27:59.705 sys 0m3.156s 00:27:59.705 14:26:26 nvmf_rdma.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:59.705 14:26:26 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:27:59.705 ************************************ 00:27:59.705 END TEST dma 00:27:59.705 ************************************ 00:27:59.705 14:26:26 nvmf_rdma -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:27:59.705 14:26:26 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:59.705 14:26:26 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:59.705 14:26:26 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:59.705 ************************************ 00:27:59.705 START TEST nvmf_identify 00:27:59.705 ************************************ 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:27:59.705 * Looking for test storage... 00:27:59.705 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- nvmf/common.sh@51 -- 
# have_pci_nics=0 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:27:59.705 14:26:26 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:01.607 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:01.607 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:28:01.607 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:01.607 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:01.607 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:01.607 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:01.607 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:01.607 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:28:01.607 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:01.607 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:28:01.607 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:28:01.607 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:28:01.607 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:28:01.607 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:28:01.607 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:28:01.607 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:01.607 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:01.607 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:01.607 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:01.607 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:01.607 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:01.607 14:26:28 nvmf_rdma.nvmf_identify 
-- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:28:01.608 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:28:01.608 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:01.608 14:26:28 
nvmf_rdma.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:28:01.608 Found net devices under 0000:81:00.0: mlx_0_0 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:28:01.608 Found net devices under 0000:81:00.1: mlx_0_1 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@420 -- # rdma_device_init 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@58 -- # uname 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@62 -- # modprobe ib_cm 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@63 -- # modprobe ib_core 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@64 -- # modprobe ib_umad 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@66 -- # modprobe iw_cm 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@502 -- # allocate_nic_ips 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # get_rdma_if_list 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:01.608 14:26:28 
nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:28:01.608 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:01.608 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:28:01.608 altname enp129s0f0np0 00:28:01.608 inet 192.168.100.8/24 scope global mlx_0_0 00:28:01.608 valid_lft forever preferred_lft forever 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:28:01.608 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:01.608 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:28:01.608 altname enp129s0f1np1 00:28:01.608 inet 192.168.100.9/24 scope global mlx_0_1 00:28:01.608 valid_lft forever preferred_lft forever 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:01.608 14:26:28 
nvmf_rdma.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # get_rdma_if_list 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:28:01.608 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:01.609 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:01.609 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:01.609 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:01.609 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:01.609 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:28:01.609 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:28:01.609 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:01.609 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:28:01.609 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:28:01.609 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:01.609 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:01.609 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:01.867 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:01.867 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:28:01.867 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:01.867 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:28:01.867 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:01.867 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:01.867 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:28:01.867 192.168.100.9' 00:28:01.867 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:28:01.867 192.168.100.9' 00:28:01.867 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 
-- # head -n 1 00:28:01.867 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:01.867 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:28:01.867 192.168.100.9' 00:28:01.867 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # tail -n +2 00:28:01.867 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # head -n 1 00:28:01.867 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:01.867 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:28:01.867 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:01.867 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:28:01.867 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:28:01.867 14:26:28 nvmf_rdma.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:28:01.867 14:26:29 nvmf_rdma.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:01.867 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:01.867 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:01.867 14:26:29 nvmf_rdma.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=200560 00:28:01.867 14:26:29 nvmf_rdma.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:01.867 14:26:29 nvmf_rdma.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:01.867 14:26:29 nvmf_rdma.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 200560 00:28:01.867 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 200560 ']' 00:28:01.867 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:01.867 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:01.867 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:01.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:01.867 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:01.867 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:01.867 [2024-07-24 14:26:29.047476] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:28:01.867 [2024-07-24 14:26:29.047551] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:01.867 EAL: No free 2048 kB hugepages reported on node 1 00:28:01.867 [2024-07-24 14:26:29.113564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:01.867 [2024-07-24 14:26:29.198660] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:01.867 [2024-07-24 14:26:29.198712] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
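(The waitforlisten call above blocks until the freshly started nvmf_tgt answers on its RPC socket. A minimal sketch of that wait, assuming SPDK's scripts/rpc.py and the default /var/tmp/spdk.sock; the real autotest helper adds more bookkeeping:)

pid=$!                       # nvmf_tgt was launched in the background
sock=/var/tmp/spdk.sock
for _ in $(seq 1 100); do
    # give up early if the target process died during startup
    kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited" >&2; exit 1; }
    # rpc_get_methods succeeds once the app is listening on the socket
    scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done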
00:28:01.867 [2024-07-24 14:26:29.198736] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:01.867 [2024-07-24 14:26:29.198747] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:01.867 [2024-07-24 14:26:29.198756] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:01.867 [2024-07-24 14:26:29.198880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:01.867 [2024-07-24 14:26:29.198945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:01.867 [2024-07-24 14:26:29.199011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:01.867 [2024-07-24 14:26:29.199013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:02.126 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:02.126 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:28:02.126 14:26:29 nvmf_rdma.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:02.126 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.126 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:02.126 [2024-07-24 14:26:29.348631] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x21889e0/0x218ced0) succeed. 00:28:02.126 [2024-07-24 14:26:29.359576] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2189fd0/0x21ce560) succeed. 00:28:02.387 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.387 14:26:29 nvmf_rdma.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:02.387 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:02.387 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:02.387 14:26:29 nvmf_rdma.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:02.387 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.387 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:02.387 Malloc0 00:28:02.387 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.387 14:26:29 nvmf_rdma.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:02.387 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.387 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:02.387 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.387 14:26:29 nvmf_rdma.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:02.387 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.387 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:02.387 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.387 14:26:29 nvmf_rdma.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:02.387 
14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:02.387 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:28:02.387 [2024-07-24 14:26:29.572039] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:28:02.387 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:02.387 14:26:29 nvmf_rdma.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:28:02.387 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:02.387 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:28:02.387 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:02.387 14:26:29 nvmf_rdma.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:28:02.387 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:02.387 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:28:02.387 [
00:28:02.387 {
00:28:02.387 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:28:02.387 "subtype": "Discovery",
00:28:02.387 "listen_addresses": [
00:28:02.387 {
00:28:02.387 "trtype": "RDMA",
00:28:02.387 "adrfam": "IPv4",
00:28:02.387 "traddr": "192.168.100.8",
00:28:02.387 "trsvcid": "4420"
00:28:02.387 }
00:28:02.387 ],
00:28:02.387 "allow_any_host": true,
00:28:02.387 "hosts": []
00:28:02.387 },
00:28:02.387 {
00:28:02.387 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:28:02.387 "subtype": "NVMe",
00:28:02.387 "listen_addresses": [
00:28:02.387 {
00:28:02.387 "trtype": "RDMA",
00:28:02.387 "adrfam": "IPv4",
00:28:02.387 "traddr": "192.168.100.8",
00:28:02.387 "trsvcid": "4420"
00:28:02.387 }
00:28:02.387 ],
00:28:02.387 "allow_any_host": true,
00:28:02.387 "hosts": [],
00:28:02.387 "serial_number": "SPDK00000000000001",
00:28:02.387 "model_number": "SPDK bdev Controller",
00:28:02.387 "max_namespaces": 32,
00:28:02.387 "min_cntlid": 1,
00:28:02.387 "max_cntlid": 65519,
00:28:02.387 "namespaces": [
00:28:02.387 {
00:28:02.387 "nsid": 1,
00:28:02.387 "bdev_name": "Malloc0",
00:28:02.387 "name": "Malloc0",
00:28:02.387 "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:28:02.387 "eui64": "ABCDEF0123456789",
00:28:02.387 "uuid": "ff8b2035-87ec-4e55-ad5f-c476d86bb8b1"
00:28:02.387 }
00:28:02.387 ]
00:28:02.387 }
00:28:02.387 ]
00:28:02.387 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:02.387 14:26:29 nvmf_rdma.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
00:28:02.387 [2024-07-24 14:26:29.612798] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
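(Condensed, the rpc_cmd calls traced above amount to the following sequence; shown as a sketch against scripts/rpc.py on the default RPC socket, not a verbatim replay of host/identify.sh:)

rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
# transport, backing bdev, subsystem, namespace, then the two listeners
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
# the discovery log printed below is then fetched over the fabric with:
build/bin/spdk_nvme_identify -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all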
00:28:02.387 [2024-07-24 14:26:29.612845] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid200710 ] 00:28:02.387 EAL: No free 2048 kB hugepages reported on node 1 00:28:02.387 [2024-07-24 14:26:29.664203] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:02.387 [2024-07-24 14:26:29.664297] nvme_rdma.c:2261:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:28:02.387 [2024-07-24 14:26:29.664317] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:28:02.387 [2024-07-24 14:26:29.664325] nvme_rdma.c:1295:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:28:02.387 [2024-07-24 14:26:29.664367] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:02.387 [2024-07-24 14:26:29.677302] nvme_rdma.c: 510:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:28:02.387 [2024-07-24 14:26:29.693286] nvme_rdma.c:1180:nvme_rdma_connect_established: *DEBUG*: rc =0 00:28:02.387 [2024-07-24 14:26:29.693303] nvme_rdma.c:1185:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:28:02.387 [2024-07-24 14:26:29.693314] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x181900 00:28:02.387 [2024-07-24 14:26:29.693328] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x181900 00:28:02.387 [2024-07-24 14:26:29.693336] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x181900 00:28:02.387 [2024-07-24 14:26:29.693344] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x181900 00:28:02.387 [2024-07-24 14:26:29.693352] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x181900 00:28:02.387 [2024-07-24 14:26:29.693362] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.693371] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.693380] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.693389] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.693397] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.693405] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.693413] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.693422] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.693431] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.693440] nvme_rdma.c: 
968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.693449] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.693458] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.693466] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.693475] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.693485] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.693495] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.693504] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.693513] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.693522] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.693532] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.693541] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.693550] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.693560] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.693570] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.693580] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.693589] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.693598] nvme_rdma.c:1199:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:28:02.388 [2024-07-24 14:26:29.693607] nvme_rdma.c:1202:nvme_rdma_connect_established: *DEBUG*: rc =0 00:28:02.388 [2024-07-24 14:26:29.693616] nvme_rdma.c:1207:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:28:02.388 [2024-07-24 14:26:29.693645] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.693667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x181900 00:28:02.388 [2024-07-24 14:26:29.700797] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.388 [2024-07-24 14:26:29.700817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:28:02.388 [2024-07-24 14:26:29.700829] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.700839] 
nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:02.388 [2024-07-24 14:26:29.700849] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:02.388 [2024-07-24 14:26:29.700859] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:02.388 [2024-07-24 14:26:29.700880] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.700893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.388 [2024-07-24 14:26:29.700930] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.388 [2024-07-24 14:26:29.700940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:28:02.388 [2024-07-24 14:26:29.700949] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:02.388 [2024-07-24 14:26:29.700957] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.700967] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:02.388 [2024-07-24 14:26:29.700978] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.700989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.388 [2024-07-24 14:26:29.701009] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.388 [2024-07-24 14:26:29.701018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:28:02.388 [2024-07-24 14:26:29.701027] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:02.388 [2024-07-24 14:26:29.701035] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.701045] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:02.388 [2024-07-24 14:26:29.701056] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.701067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.388 [2024-07-24 14:26:29.701100] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.388 [2024-07-24 14:26:29.701109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:02.388 [2024-07-24 14:26:29.701118] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:02.388 [2024-07-24 14:26:29.701125] 
nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.701142] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.701154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.388 [2024-07-24 14:26:29.701178] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.388 [2024-07-24 14:26:29.701186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:02.388 [2024-07-24 14:26:29.701195] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:02.388 [2024-07-24 14:26:29.701202] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:02.388 [2024-07-24 14:26:29.701210] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.701219] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:02.388 [2024-07-24 14:26:29.701327] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:02.388 [2024-07-24 14:26:29.701335] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:02.388 [2024-07-24 14:26:29.701349] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.701359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.388 [2024-07-24 14:26:29.701385] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.388 [2024-07-24 14:26:29.701394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:02.388 [2024-07-24 14:26:29.701402] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:02.388 [2024-07-24 14:26:29.701410] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.701422] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.701432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.388 [2024-07-24 14:26:29.701450] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.388 [2024-07-24 14:26:29.701459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:28:02.388 [2024-07-24 14:26:29.701466] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller 
is ready 00:28:02.388 [2024-07-24 14:26:29.701474] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:02.388 [2024-07-24 14:26:29.701482] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.701491] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:02.388 [2024-07-24 14:26:29.701504] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:02.388 [2024-07-24 14:26:29.701518] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:28:02.388 [2024-07-24 14:26:29.701529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x181900 00:28:02.388 [2024-07-24 14:26:29.701577] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.388 [2024-07-24 14:26:29.701586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:02.388 [2024-07-24 14:26:29.701599] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:02.388 [2024-07-24 14:26:29.701607] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:02.389 [2024-07-24 14:26:29.701614] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:02.389 [2024-07-24 14:26:29.701621] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:02.389 [2024-07-24 14:26:29.701629] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:28:02.389 [2024-07-24 14:26:29.701636] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:02.389 [2024-07-24 14:26:29.701644] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x181900 00:28:02.389 [2024-07-24 14:26:29.701658] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:02.389 [2024-07-24 14:26:29.701670] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:28:02.389 [2024-07-24 14:26:29.701682] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.389 [2024-07-24 14:26:29.701710] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.389 [2024-07-24 14:26:29.701719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:02.389 [2024-07-24 14:26:29.701731] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x181900 00:28:02.389 [2024-07-24 14:26:29.701741] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:02.389 [2024-07-24 14:26:29.701750] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x181900 00:28:02.389 [2024-07-24 14:26:29.701759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:02.389 [2024-07-24 14:26:29.701768] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.389 [2024-07-24 14:26:29.701799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:02.389 [2024-07-24 14:26:29.701810] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x181900 00:28:02.389 [2024-07-24 14:26:29.701819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:02.389 [2024-07-24 14:26:29.701827] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:02.389 [2024-07-24 14:26:29.701850] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x181900 00:28:02.389 [2024-07-24 14:26:29.701863] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:02.389 [2024-07-24 14:26:29.701875] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:28:02.389 [2024-07-24 14:26:29.701886] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.389 [2024-07-24 14:26:29.701909] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.389 [2024-07-24 14:26:29.701920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:28:02.389 [2024-07-24 14:26:29.701930] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:02.389 [2024-07-24 14:26:29.701938] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:02.389 [2024-07-24 14:26:29.701946] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x181900 00:28:02.389 [2024-07-24 14:26:29.701961] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:28:02.389 [2024-07-24 14:26:29.701974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x181900 00:28:02.389 [2024-07-24 14:26:29.702007] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.389 [2024-07-24 14:26:29.702017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:02.389 [2024-07-24 14:26:29.702029] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 
length 0x10 lkey 0x181900 00:28:02.389 [2024-07-24 14:26:29.702043] nvme_ctrlr.c:4038:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:02.389 [2024-07-24 14:26:29.702079] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:28:02.389 [2024-07-24 14:26:29.702109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x181900 00:28:02.389 [2024-07-24 14:26:29.702121] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x181900 00:28:02.389 [2024-07-24 14:26:29.702131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:02.389 [2024-07-24 14:26:29.702170] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.389 [2024-07-24 14:26:29.702180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:02.389 [2024-07-24 14:26:29.702197] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x181900 00:28:02.389 [2024-07-24 14:26:29.702209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x181900 00:28:02.389 [2024-07-24 14:26:29.702217] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x181900 00:28:02.389 [2024-07-24 14:26:29.702226] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.389 [2024-07-24 14:26:29.702234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:02.389 [2024-07-24 14:26:29.702241] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x181900 00:28:02.389 [2024-07-24 14:26:29.702250] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.389 [2024-07-24 14:26:29.702257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:02.389 [2024-07-24 14:26:29.702271] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x181900 00:28:02.389 [2024-07-24 14:26:29.702283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x181900 00:28:02.389 [2024-07-24 14:26:29.702295] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x181900 00:28:02.389 [2024-07-24 14:26:29.702315] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.389 [2024-07-24 14:26:29.702324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:02.389 [2024-07-24 14:26:29.702340] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x181900 00:28:02.389 ===================================================== 00:28:02.389 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:02.389 
=====================================================
00:28:02.389 Controller Capabilities/Features
00:28:02.389 ================================
00:28:02.389 Vendor ID: 0000
00:28:02.389 Subsystem Vendor ID: 0000
00:28:02.389 Serial Number: ....................
00:28:02.389 Model Number: ........................................
00:28:02.389 Firmware Version: 24.05.1
00:28:02.389 Recommended Arb Burst: 0
00:28:02.389 IEEE OUI Identifier: 00 00 00
00:28:02.389 Multi-path I/O
00:28:02.389 May have multiple subsystem ports: No
00:28:02.389 May have multiple controllers: No
00:28:02.389 Associated with SR-IOV VF: No
00:28:02.389 Max Data Transfer Size: 131072
00:28:02.389 Max Number of Namespaces: 0
00:28:02.389 Max Number of I/O Queues: 1024
00:28:02.389 NVMe Specification Version (VS): 1.3
00:28:02.389 NVMe Specification Version (Identify): 1.3
00:28:02.389 Maximum Queue Entries: 128
00:28:02.389 Contiguous Queues Required: Yes
00:28:02.389 Arbitration Mechanisms Supported
00:28:02.389 Weighted Round Robin: Not Supported
00:28:02.389 Vendor Specific: Not Supported
00:28:02.389 Reset Timeout: 15000 ms
00:28:02.389 Doorbell Stride: 4 bytes
00:28:02.389 NVM Subsystem Reset: Not Supported
00:28:02.389 Command Sets Supported
00:28:02.389 NVM Command Set: Supported
00:28:02.389 Boot Partition: Not Supported
00:28:02.389 Memory Page Size Minimum: 4096 bytes
00:28:02.389 Memory Page Size Maximum: 4096 bytes
00:28:02.389 Persistent Memory Region: Not Supported
00:28:02.389 Optional Asynchronous Events Supported
00:28:02.389 Namespace Attribute Notices: Not Supported
00:28:02.389 Firmware Activation Notices: Not Supported
00:28:02.389 ANA Change Notices: Not Supported
00:28:02.389 PLE Aggregate Log Change Notices: Not Supported
00:28:02.389 LBA Status Info Alert Notices: Not Supported
00:28:02.389 EGE Aggregate Log Change Notices: Not Supported
00:28:02.389 Normal NVM Subsystem Shutdown event: Not Supported
00:28:02.389 Zone Descriptor Change Notices: Not Supported
00:28:02.389 Discovery Log Change Notices: Supported
00:28:02.389 Controller Attributes
00:28:02.389 128-bit Host Identifier: Not Supported
00:28:02.389 Non-Operational Permissive Mode: Not Supported
00:28:02.389 NVM Sets: Not Supported
00:28:02.389 Read Recovery Levels: Not Supported
00:28:02.389 Endurance Groups: Not Supported
00:28:02.389 Predictable Latency Mode: Not Supported
00:28:02.389 Traffic Based Keep Alive: Not Supported
00:28:02.389 Namespace Granularity: Not Supported
00:28:02.389 SQ Associations: Not Supported
00:28:02.389 UUID List: Not Supported
00:28:02.389 Multi-Domain Subsystem: Not Supported
00:28:02.389 Fixed Capacity Management: Not Supported
00:28:02.389 Variable Capacity Management: Not Supported
00:28:02.389 Delete Endurance Group: Not Supported
00:28:02.389 Delete NVM Set: Not Supported
00:28:02.389 Extended LBA Formats Supported: Not Supported
00:28:02.389 Flexible Data Placement Supported: Not Supported
00:28:02.389
00:28:02.389 Controller Memory Buffer Support
00:28:02.389 ================================
00:28:02.389 Supported: No
00:28:02.389
00:28:02.389 Persistent Memory Region Support
00:28:02.390 ================================
00:28:02.390 Supported: No
00:28:02.390
00:28:02.390 Admin Command Set Attributes
00:28:02.390 ============================
00:28:02.390 Security Send/Receive: Not Supported
00:28:02.390 Format NVM: Not Supported
00:28:02.390 Firmware Activate/Download: Not Supported
00:28:02.390 Namespace Management: Not Supported
00:28:02.390 Device Self-Test: Not Supported
00:28:02.390 Directives: Not Supported
00:28:02.390 NVMe-MI: Not Supported
00:28:02.390 Virtualization Management: Not Supported
00:28:02.390 Doorbell Buffer Config: Not Supported
00:28:02.390 Get LBA Status Capability: Not Supported
00:28:02.390 Command & Feature Lockdown Capability: Not Supported
00:28:02.390 Abort Command Limit: 1
00:28:02.390 Async Event Request Limit: 4
00:28:02.390 Number of Firmware Slots: N/A
00:28:02.390 Firmware Slot 1 Read-Only: N/A
00:28:02.390 Firmware Activation Without Reset: N/A
00:28:02.390 Multiple Update Detection Support: N/A
00:28:02.390 Firmware Update Granularity: No Information Provided
00:28:02.390 Per-Namespace SMART Log: No
00:28:02.390 Asymmetric Namespace Access Log Page: Not Supported
00:28:02.390 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:28:02.390 Command Effects Log Page: Not Supported
00:28:02.390 Get Log Page Extended Data: Supported
00:28:02.390 Telemetry Log Pages: Not Supported
00:28:02.390 Persistent Event Log Pages: Not Supported
00:28:02.390 Supported Log Pages Log Page: May Support
00:28:02.390 Commands Supported & Effects Log Page: Not Supported
00:28:02.390 Feature Identifiers & Effects Log Page: May Support
00:28:02.390 NVMe-MI Commands & Effects Log Page: May Support
00:28:02.390 Data Area 4 for Telemetry Log: Not Supported
00:28:02.390 Error Log Page Entries Supported: 128
00:28:02.390 Keep Alive: Not Supported
00:28:02.390
00:28:02.390 NVM Command Set Attributes
00:28:02.390 ==========================
00:28:02.390 Submission Queue Entry Size
00:28:02.390 Max: 1
00:28:02.390 Min: 1
00:28:02.390 Completion Queue Entry Size
00:28:02.390 Max: 1
00:28:02.390 Min: 1
00:28:02.390 Number of Namespaces: 0
00:28:02.390 Compare Command: Not Supported
00:28:02.390 Write Uncorrectable Command: Not Supported
00:28:02.390 Dataset Management Command: Not Supported
00:28:02.390 Write Zeroes Command: Not Supported
00:28:02.390 Set Features Save Field: Not Supported
00:28:02.390 Reservations: Not Supported
00:28:02.390 Timestamp: Not Supported
00:28:02.390 Copy: Not Supported
00:28:02.390 Volatile Write Cache: Not Present
00:28:02.390 Atomic Write Unit (Normal): 1
00:28:02.390 Atomic Write Unit (PFail): 1
00:28:02.390 Atomic Compare & Write Unit: 1
00:28:02.390 Fused Compare & Write: Supported
00:28:02.390 Scatter-Gather List
00:28:02.390 SGL Command Set: Supported
00:28:02.390 SGL Keyed: Supported
00:28:02.390 SGL Bit Bucket Descriptor: Not Supported
00:28:02.390 SGL Metadata Pointer: Not Supported
00:28:02.390 Oversized SGL: Not Supported
00:28:02.390 SGL Metadata Address: Not Supported
00:28:02.390 SGL Offset: Supported
00:28:02.390 Transport SGL Data Block: Not Supported
00:28:02.390 Replay Protected Memory Block: Not Supported
00:28:02.390
00:28:02.390 Firmware Slot Information
00:28:02.390 =========================
00:28:02.390 Active slot: 0
00:28:02.390
00:28:02.390
00:28:02.390 Error Log
00:28:02.390 =========
00:28:02.390
00:28:02.390 Active Namespaces
00:28:02.390 =================
00:28:02.390 Discovery Log Page
00:28:02.390 ==================
00:28:02.390 Generation Counter: 2
00:28:02.390 Number of Records: 2
00:28:02.390 Record Format: 0
00:28:02.390
00:28:02.390 Discovery Log Entry 0
00:28:02.390 ----------------------
00:28:02.390 Transport Type: 1 (RDMA)
00:28:02.390 Address Family: 1 (IPv4)
00:28:02.390 Subsystem Type: 3 (Current Discovery Subsystem)
00:28:02.390 Entry Flags:
00:28:02.390 Duplicate Returned Information: 1
00:28:02.390 Explicit Persistent Connection Support for Discovery: 1
00:28:02.390 Transport Requirements:
00:28:02.390 Secure Channel: Not Required 00:28:02.390 Port ID: 0 (0x0000) 00:28:02.390 Controller ID: 65535 (0xffff) 00:28:02.390 Admin Max SQ Size: 128 00:28:02.390 Transport Service Identifier: 4420 00:28:02.390 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:02.390 Transport Address: 192.168.100.8 00:28:02.390 Transport Specific Address Subtype - RDMA 00:28:02.390 RDMA QP Service Type: 1 (Reliable Connected) 00:28:02.390 RDMA Provider Type: 1 (No provider specified) 00:28:02.390 RDMA CM Service: 1 (RDMA_CM) 00:28:02.390 Discovery Log Entry 1 00:28:02.390 ---------------------- 00:28:02.390 Transport Type: 1 (RDMA) 00:28:02.390 Address Family: 1 (IPv4) 00:28:02.390 Subsystem Type: 2 (NVM Subsystem) 00:28:02.390 Entry Flags: 00:28:02.390 Duplicate Returned Information: 0 00:28:02.390 Explicit Persistent Connection Support for Discovery: 0 00:28:02.390 Transport Requirements: 00:28:02.390 Secure Channel: Not Required 00:28:02.390 Port ID: 0 (0x0000) 00:28:02.390 Controller ID: 65535 (0xffff) 00:28:02.390 Admin Max SQ Size: [2024-07-24 14:26:29.702437] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:28:02.390 [2024-07-24 14:26:29.702455] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 34497 doesn't match qid 00:28:02.390 [2024-07-24 14:26:29.702474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32758 cdw0:5 sqhd:25f0 p:0 m:0 dnr:0 00:28:02.390 [2024-07-24 14:26:29.702483] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 34497 doesn't match qid 00:28:02.390 [2024-07-24 14:26:29.702495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32758 cdw0:5 sqhd:25f0 p:0 m:0 dnr:0 00:28:02.390 [2024-07-24 14:26:29.702504] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 34497 doesn't match qid 00:28:02.390 [2024-07-24 14:26:29.702514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32758 cdw0:5 sqhd:25f0 p:0 m:0 dnr:0 00:28:02.390 [2024-07-24 14:26:29.702522] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 34497 doesn't match qid 00:28:02.390 [2024-07-24 14:26:29.702533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32758 cdw0:5 sqhd:25f0 p:0 m:0 dnr:0 00:28:02.390 [2024-07-24 14:26:29.702546] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x181900 00:28:02.390 [2024-07-24 14:26:29.702558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.390 [2024-07-24 14:26:29.702579] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.390 [2024-07-24 14:26:29.702588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:28:02.390 [2024-07-24 14:26:29.702600] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.390 [2024-07-24 14:26:29.702610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.390 [2024-07-24 14:26:29.702618] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x181900 00:28:02.390 [2024-07-24 
14:26:29.702637] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.390 [2024-07-24 14:26:29.702646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:02.390 [2024-07-24 14:26:29.702658] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:02.390 [2024-07-24 14:26:29.702667] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:02.390 [2024-07-24 14:26:29.702675] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x181900 00:28:02.390 [2024-07-24 14:26:29.702687] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.390 [2024-07-24 14:26:29.702699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.390 [2024-07-24 14:26:29.702724] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.390 [2024-07-24 14:26:29.702734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:28:02.390 [2024-07-24 14:26:29.702746] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x181900 00:28:02.390 [2024-07-24 14:26:29.702759] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.390 [2024-07-24 14:26:29.702788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.390 [2024-07-24 14:26:29.702823] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.390 [2024-07-24 14:26:29.702848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:28:02.390 [2024-07-24 14:26:29.702857] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x181900 00:28:02.390 [2024-07-24 14:26:29.702871] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.390 [2024-07-24 14:26:29.702884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.390 [2024-07-24 14:26:29.702906] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.391 [2024-07-24 14:26:29.702915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:28:02.391 [2024-07-24 14:26:29.702924] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x181900 00:28:02.391 [2024-07-24 14:26:29.702937] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.391 [2024-07-24 14:26:29.702949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.391 [2024-07-24 14:26:29.702972] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.391 [2024-07-24 14:26:29.702993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:28:02.391 [2024-07-24 14:26:29.703003] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x181900 00:28:02.391 [2024-07-24 14:26:29.703017] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.391 [2024-07-24 14:26:29.703030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.391 [2024-07-24 14:26:29.703049] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.391 [2024-07-24 14:26:29.703059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:28:02.391 [2024-07-24 14:26:29.703068] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x181900 00:28:02.391 [2024-07-24 14:26:29.703080] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.391 [2024-07-24 14:26:29.703094] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.391 [2024-07-24 14:26:29.703116] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.391 [2024-07-24 14:26:29.703126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:28:02.391 [2024-07-24 14:26:29.703135] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x181900 00:28:02.391 [2024-07-24 14:26:29.703163] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.391 [2024-07-24 14:26:29.703175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.391 [2024-07-24 14:26:29.703200] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.391 [2024-07-24 14:26:29.703224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:28:02.391 [2024-07-24 14:26:29.703236] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x181900 00:28:02.391 [2024-07-24 14:26:29.703250] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.391 [2024-07-24 14:26:29.703262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.391 [2024-07-24 14:26:29.703281] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.391 [2024-07-24 14:26:29.703290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:28:02.391 [2024-07-24 14:26:29.703298] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x181900 00:28:02.391 [2024-07-24 14:26:29.703310] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.391 [2024-07-24 14:26:29.703322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:28:02.391 [2024-07-24 14:26:29.703344] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.391 [2024-07-24 14:26:29.703352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:28:02.391 [2024-07-24 14:26:29.703361] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x181900 00:28:02.391 [2024-07-24 14:26:29.703373] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.391 [2024-07-24 14:26:29.703384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.391 [2024-07-24 14:26:29.703402] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.391 [2024-07-24 14:26:29.703410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:28:02.391 [2024-07-24 14:26:29.703419] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x181900 00:28:02.391 [2024-07-24 14:26:29.703431] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.391 [2024-07-24 14:26:29.703442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.391 [2024-07-24 14:26:29.703475] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.391 [2024-07-24 14:26:29.703485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:28:02.391 [2024-07-24 14:26:29.703493] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x181900 00:28:02.391 [2024-07-24 14:26:29.703505] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.391 [2024-07-24 14:26:29.703517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.391 [2024-07-24 14:26:29.703542] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.391 [2024-07-24 14:26:29.703551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:28:02.391 [2024-07-24 14:26:29.703559] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x181900 00:28:02.391 [2024-07-24 14:26:29.703572] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.391 [2024-07-24 14:26:29.703583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.391 [2024-07-24 14:26:29.703606] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.391 [2024-07-24 14:26:29.703619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:28:02.391 [2024-07-24 14:26:29.703644] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x181900 00:28:02.391 [2024-07-24 14:26:29.703658] 
nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.391 [2024-07-24 14:26:29.703671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.391 [2024-07-24 14:26:29.703690] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.391 [2024-07-24 14:26:29.703699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:28:02.391 [2024-07-24 14:26:29.703708] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x181900 00:28:02.391 [2024-07-24 14:26:29.703721] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.391 [2024-07-24 14:26:29.703733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.391 [2024-07-24 14:26:29.703756] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.391 [2024-07-24 14:26:29.703765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:02.391 [2024-07-24 14:26:29.703774] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x181900 00:28:02.391 [2024-07-24 14:26:29.703787] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.391 [2024-07-24 14:26:29.703807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.391 [2024-07-24 14:26:29.703831] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.391 [2024-07-24 14:26:29.703841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:28:02.391 [2024-07-24 14:26:29.703849] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x181900 00:28:02.391 [2024-07-24 14:26:29.703862] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.391 [2024-07-24 14:26:29.703875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.391 [2024-07-24 14:26:29.703900] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.391 [2024-07-24 14:26:29.703909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:28:02.391 [2024-07-24 14:26:29.703918] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x181900 00:28:02.391 [2024-07-24 14:26:29.703931] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.391 [2024-07-24 14:26:29.703943] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.391 [2024-07-24 14:26:29.703966] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.391 [2024-07-24 14:26:29.703975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:28:02.391 [2024-07-24 14:26:29.703984] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x181900 00:28:02.391 [2024-07-24 14:26:29.703996] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.391 [2024-07-24 14:26:29.704008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.391 [2024-07-24 14:26:29.704032] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.391 [2024-07-24 14:26:29.704044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:28:02.391 [2024-07-24 14:26:29.704053] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x181900 00:28:02.391 [2024-07-24 14:26:29.704066] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.392 [2024-07-24 14:26:29.704078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.392 [2024-07-24 14:26:29.704098] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.392 [2024-07-24 14:26:29.704122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:28:02.392 [2024-07-24 14:26:29.704130] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x181900 00:28:02.392 [2024-07-24 14:26:29.704143] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.392 [2024-07-24 14:26:29.704155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.392 [2024-07-24 14:26:29.704193] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.392 [2024-07-24 14:26:29.704202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:28:02.392 [2024-07-24 14:26:29.704210] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x181900 00:28:02.392 [2024-07-24 14:26:29.704222] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.392 [2024-07-24 14:26:29.704233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.392 [2024-07-24 14:26:29.704251] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.392 [2024-07-24 14:26:29.704259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:28:02.392 [2024-07-24 14:26:29.704267] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x181900 00:28:02.392 [2024-07-24 14:26:29.704279] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.392 [2024-07-24 14:26:29.704290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:28:02.392 [2024-07-24 14:26:29.704308] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.392 [2024-07-24 14:26:29.704317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:28:02.392 [2024-07-24 14:26:29.704325] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x181900 00:28:02.392 [2024-07-24 14:26:29.704336] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.392 [2024-07-24 14:26:29.704348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.392 [2024-07-24 14:26:29.704368] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.392 [2024-07-24 14:26:29.704376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:28:02.392 [2024-07-24 14:26:29.704384] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x181900 00:28:02.392 [2024-07-24 14:26:29.704396] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.392 [2024-07-24 14:26:29.704407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.392 [2024-07-24 14:26:29.704428] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.392 [2024-07-24 14:26:29.704437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:28:02.392 [2024-07-24 14:26:29.704445] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x181900 00:28:02.392 [2024-07-24 14:26:29.704457] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.392 [2024-07-24 14:26:29.704469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.392 [2024-07-24 14:26:29.704486] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.392 [2024-07-24 14:26:29.704495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:28:02.392 [2024-07-24 14:26:29.704503] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x181900 00:28:02.392 [2024-07-24 14:26:29.704515] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.392 [2024-07-24 14:26:29.704526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.392 [2024-07-24 14:26:29.704544] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.392 [2024-07-24 14:26:29.704552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:28:02.392 [2024-07-24 14:26:29.704560] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x181900 00:28:02.392 [2024-07-24 14:26:29.704572] 
nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.392 [2024-07-24 14:26:29.704583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.392 [2024-07-24 14:26:29.704603] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.392 [2024-07-24 14:26:29.704611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:28:02.392 [2024-07-24 14:26:29.704619] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x181900 00:28:02.392 [2024-07-24 14:26:29.704631] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.392 [2024-07-24 14:26:29.704642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.392 [2024-07-24 14:26:29.704662] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.392 [2024-07-24 14:26:29.704670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:28:02.392 [2024-07-24 14:26:29.704678] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x181900 00:28:02.392 [2024-07-24 14:26:29.704691] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.392 [2024-07-24 14:26:29.704701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.392 [2024-07-24 14:26:29.704725] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.392 [2024-07-24 14:26:29.704734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:28:02.392 [2024-07-24 14:26:29.704742] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x181900 00:28:02.392 [2024-07-24 14:26:29.704753] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.392 [2024-07-24 14:26:29.704765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.392 [2024-07-24 14:26:29.708808] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.392 [2024-07-24 14:26:29.708822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:28:02.392 [2024-07-24 14:26:29.708832] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x181900 00:28:02.392 [2024-07-24 14:26:29.708846] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.392 [2024-07-24 14:26:29.708859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.392 [2024-07-24 14:26:29.708902] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.392 [2024-07-24 14:26:29.708912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0011 p:0 m:0 dnr:0
00:28:02.392 [2024-07-24 14:26:29.708920] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x181900
00:28:02.392 [2024-07-24 14:26:29.708931] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds
00:28:02.392 Transport Service Identifier: 4420
00:28:02.392 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:28:02.392 Transport Address: 192.168.100.8
00:28:02.392 Transport Specific Address Subtype - RDMA
00:28:02.392 RDMA QP Service Type: 1 (Reliable Connected)
00:28:02.392 RDMA Provider Type: 1 (No provider specified)
00:28:02.392 RDMA CM Service: 1 (RDMA_CM)
00:28:02.653 14:26:29 nvmf_rdma.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:28:02.653 [2024-07-24 14:26:29.778413] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:28:02.653 [2024-07-24 14:26:29.778455] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid200712 ]
00:28:02.653 EAL: No free 2048 kB hugepages reported on node 1
00:28:02.653 [2024-07-24 14:26:29.826964] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout)
00:28:02.653 [2024-07-24 14:26:29.827059] nvme_rdma.c:2261:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr
00:28:02.653 [2024-07-24 14:26:29.827081] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2
00:28:02.653 [2024-07-24 14:26:29.827089] nvme_rdma.c:1295:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420
00:28:02.653 [2024-07-24 14:26:29.827139] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:28:02.653 [2024-07-24 14:26:29.840354] nvme_rdma.c: 510:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32.
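The identify run above drives SPDK's host-side controller state machine: parse a transport ID, connect (FABRIC CONNECT, PROPERTY GET/SET for VS, CAP and CC.EN, polling CSTS.RDY), then issue IDENTIFY and feature commands. A minimal sketch of the same attach in C against SPDK's public NVMe API, assuming an SPDK development tree to compile and link against; the program name and the trimmed error handling are illustrative, not part of this test:

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	/* Environment init; the "DPDK EAL parameters" line above comes from this step. */
	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch"; /* hypothetical app name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same transport ID string the test passes via spdk_nvme_identify -r. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:rdma adrfam:IPv4 traddr:192.168.100.8 "
	    "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* spdk_nvme_connect() runs the init sequence logged above: FABRIC CONNECT,
	 * reading VS/CAP, setting CC.EN, waiting for CSTS.RDY = 1, then IDENTIFY,
	 * AER configuration and keep-alive setup. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* Fields such as "Model Number: SPDK bdev Controller" in the report
	 * come from the cached IDENTIFY CONTROLLER data. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Serial Number: %.20s\n", cdata->sn);
	printf("Model Number:  %.40s\n", cdata->mn);

	spdk_nvme_detach(ctrlr);
	return 0;
}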
00:28:02.653 [2024-07-24 14:26:29.856288] nvme_rdma.c:1180:nvme_rdma_connect_established: *DEBUG*: rc =0 00:28:02.653 [2024-07-24 14:26:29.856304] nvme_rdma.c:1185:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:28:02.653 [2024-07-24 14:26:29.856312] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x181900 00:28:02.653 [2024-07-24 14:26:29.856321] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x181900 00:28:02.653 [2024-07-24 14:26:29.856329] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x181900 00:28:02.653 [2024-07-24 14:26:29.856341] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.856349] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.856357] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.856364] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.856372] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.856380] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.856387] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.856395] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.856403] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.856410] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.856418] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.856426] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.856433] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.856441] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.856449] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.856457] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.856464] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.856472] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.856480] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.856487] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x181900 00:28:02.654 [2024-07-24 
14:26:29.856495] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.856503] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.856510] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.856518] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.856526] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.856533] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.856541] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.856549] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.856556] nvme_rdma.c:1199:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:28:02.654 [2024-07-24 14:26:29.856567] nvme_rdma.c:1202:nvme_rdma_connect_established: *DEBUG*: rc =0 00:28:02.654 [2024-07-24 14:26:29.856573] nvme_rdma.c:1207:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:28:02.654 [2024-07-24 14:26:29.856594] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.856614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x181900 00:28:02.654 [2024-07-24 14:26:29.863819] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.654 [2024-07-24 14:26:29.863837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:28:02.654 [2024-07-24 14:26:29.863847] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.863857] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:02.654 [2024-07-24 14:26:29.863866] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:02.654 [2024-07-24 14:26:29.863876] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:02.654 [2024-07-24 14:26:29.863893] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.863907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.654 [2024-07-24 14:26:29.863933] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.654 [2024-07-24 14:26:29.863943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:28:02.654 [2024-07-24 14:26:29.863951] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:02.654 [2024-07-24 14:26:29.863959] nvme_rdma.c:2436:nvme_rdma_request_ready: 
*DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.863969] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:02.654 [2024-07-24 14:26:29.863980] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.863991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.654 [2024-07-24 14:26:29.864014] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.654 [2024-07-24 14:26:29.864022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:28:02.654 [2024-07-24 14:26:29.864031] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:02.654 [2024-07-24 14:26:29.864039] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.864049] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:02.654 [2024-07-24 14:26:29.864060] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.864071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.654 [2024-07-24 14:26:29.864096] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.654 [2024-07-24 14:26:29.864119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:02.654 [2024-07-24 14:26:29.864128] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:02.654 [2024-07-24 14:26:29.864136] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.864149] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.864160] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.654 [2024-07-24 14:26:29.864184] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.654 [2024-07-24 14:26:29.864194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:02.654 [2024-07-24 14:26:29.864201] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:02.654 [2024-07-24 14:26:29.864209] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:02.654 [2024-07-24 14:26:29.864216] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.864226] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by 
writing CC.EN = 1 (timeout 15000 ms) 00:28:02.654 [2024-07-24 14:26:29.864334] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:02.654 [2024-07-24 14:26:29.864341] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:02.654 [2024-07-24 14:26:29.864352] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.864364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.654 [2024-07-24 14:26:29.864385] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.654 [2024-07-24 14:26:29.864393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:02.654 [2024-07-24 14:26:29.864401] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:02.654 [2024-07-24 14:26:29.864409] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.864421] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.864432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.654 [2024-07-24 14:26:29.864454] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.654 [2024-07-24 14:26:29.864462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:28:02.654 [2024-07-24 14:26:29.864470] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:02.654 [2024-07-24 14:26:29.864478] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:02.654 [2024-07-24 14:26:29.864485] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.864494] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:02.654 [2024-07-24 14:26:29.864506] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:02.654 [2024-07-24 14:26:29.864520] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:28:02.654 [2024-07-24 14:26:29.864531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x181900 00:28:02.654 [2024-07-24 14:26:29.864595] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.654 [2024-07-24 14:26:29.864604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:02.655 [2024-07-24 14:26:29.864619] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:02.655 [2024-07-24 14:26:29.864627] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:02.655 [2024-07-24 14:26:29.864634] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:02.655 [2024-07-24 14:26:29.864641] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:02.655 [2024-07-24 14:26:29.864648] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:02.655 [2024-07-24 14:26:29.864655] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:02.655 [2024-07-24 14:26:29.864663] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x181900 00:28:02.655 [2024-07-24 14:26:29.864676] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:02.655 [2024-07-24 14:26:29.864688] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:28:02.655 [2024-07-24 14:26:29.864699] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.655 [2024-07-24 14:26:29.864720] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.655 [2024-07-24 14:26:29.864728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:02.655 [2024-07-24 14:26:29.864739] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x181900 00:28:02.655 [2024-07-24 14:26:29.864749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:02.655 [2024-07-24 14:26:29.864758] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x181900 00:28:02.655 [2024-07-24 14:26:29.864767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:02.655 [2024-07-24 14:26:29.864801] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.655 [2024-07-24 14:26:29.864813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:02.655 [2024-07-24 14:26:29.864822] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x181900 00:28:02.655 [2024-07-24 14:26:29.864832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:02.655 [2024-07-24 14:26:29.864840] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:02.655 [2024-07-24 14:26:29.864847] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x181900 00:28:02.655 [2024-07-24 14:26:29.864860] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:02.655 [2024-07-24 14:26:29.864871] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:28:02.655 [2024-07-24 14:26:29.864881] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.655 [2024-07-24 14:26:29.864902] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.655 [2024-07-24 14:26:29.864912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:28:02.655 [2024-07-24 14:26:29.864920] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:28:02.655 [2024-07-24 14:26:29.864932] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:28:02.655 [2024-07-24 14:26:29.864940] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x181900 00:28:02.655 [2024-07-24 14:26:29.864950] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:02.655 [2024-07-24 14:26:29.864960] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:02.655 [2024-07-24 14:26:29.864970] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:28:02.655 [2024-07-24 14:26:29.864981] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.655 [2024-07-24 14:26:29.865011] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.655 [2024-07-24 14:26:29.865020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:28:02.655 [2024-07-24 14:26:29.865102] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:02.655 [2024-07-24 14:26:29.865114] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x181900 00:28:02.655 [2024-07-24 14:26:29.865126] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:02.655 [2024-07-24 14:26:29.865139] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:28:02.655 [2024-07-24 14:26:29.865151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x181900 00:28:02.655 [2024-07-24 14:26:29.865182] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.655 [2024-07-24 14:26:29.865190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:02.655 [2024-07-24 14:26:29.865209] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:28:02.655 
[2024-07-24 14:26:29.865224] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:02.655 [2024-07-24 14:26:29.865232] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x181900 00:28:02.655 [2024-07-24 14:26:29.865244] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:28:02.655 [2024-07-24 14:26:29.865256] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:28:02.655 [2024-07-24 14:26:29.865267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x181900 00:28:02.655 [2024-07-24 14:26:29.865300] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.655 [2024-07-24 14:26:29.865308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:02.655 [2024-07-24 14:26:29.865324] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:02.655 [2024-07-24 14:26:29.865333] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x181900 00:28:02.655 [2024-07-24 14:26:29.865344] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:02.655 [2024-07-24 14:26:29.865360] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:28:02.655 [2024-07-24 14:26:29.865372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x181900 00:28:02.655 [2024-07-24 14:26:29.865400] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.655 [2024-07-24 14:26:29.865409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:02.655 [2024-07-24 14:26:29.865422] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:02.655 [2024-07-24 14:26:29.865430] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x181900 00:28:02.655 [2024-07-24 14:26:29.865440] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:02.655 [2024-07-24 14:26:29.865453] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:28:02.655 [2024-07-24 14:26:29.865463] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:02.655 [2024-07-24 14:26:29.865471] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:02.655 [2024-07-24 14:26:29.865479] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - 
Host ID 00:28:02.655 [2024-07-24 14:26:29.865486] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:28:02.655 [2024-07-24 14:26:29.865494] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:28:02.655 [2024-07-24 14:26:29.865515] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:28:02.655 [2024-07-24 14:26:29.865527] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.655 [2024-07-24 14:26:29.865537] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x181900 00:28:02.656 [2024-07-24 14:26:29.865547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:02.656 [2024-07-24 14:26:29.865562] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.656 [2024-07-24 14:26:29.865571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:02.656 [2024-07-24 14:26:29.865580] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x181900 00:28:02.656 [2024-07-24 14:26:29.865589] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.656 [2024-07-24 14:26:29.865596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:02.656 [2024-07-24 14:26:29.865604] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x181900 00:28:02.656 [2024-07-24 14:26:29.865616] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x181900 00:28:02.656 [2024-07-24 14:26:29.865627] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.656 [2024-07-24 14:26:29.865648] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.656 [2024-07-24 14:26:29.865656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:02.656 [2024-07-24 14:26:29.865665] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x181900 00:28:02.656 [2024-07-24 14:26:29.865680] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x181900 00:28:02.656 [2024-07-24 14:26:29.865691] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.656 [2024-07-24 14:26:29.865714] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.656 [2024-07-24 14:26:29.865723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:02.656 [2024-07-24 14:26:29.865731] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x181900 00:28:02.656 [2024-07-24 14:26:29.865743] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 
lkey 0x181900 00:28:02.656 [2024-07-24 14:26:29.865754] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.656 [2024-07-24 14:26:29.865802] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.656 [2024-07-24 14:26:29.865813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:28:02.656 [2024-07-24 14:26:29.865821] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x181900 00:28:02.656 [2024-07-24 14:26:29.865838] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x181900 00:28:02.656 [2024-07-24 14:26:29.865851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x181900 00:28:02.656 [2024-07-24 14:26:29.865863] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181900 00:28:02.656 [2024-07-24 14:26:29.865873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x181900 00:28:02.656 [2024-07-24 14:26:29.865885] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x181900 00:28:02.656 [2024-07-24 14:26:29.865895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x181900 00:28:02.656 [2024-07-24 14:26:29.865907] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x181900 00:28:02.656 [2024-07-24 14:26:29.865917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x181900 00:28:02.656 [2024-07-24 14:26:29.865930] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.656 [2024-07-24 14:26:29.865939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:02.656 [2024-07-24 14:26:29.865957] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x181900 00:28:02.656 [2024-07-24 14:26:29.865967] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.656 [2024-07-24 14:26:29.865975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:02.656 [2024-07-24 14:26:29.865987] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x181900 00:28:02.656 [2024-07-24 14:26:29.865997] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.656 [2024-07-24 14:26:29.866005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:02.656 [2024-07-24 14:26:29.866016] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x181900 00:28:02.656 [2024-07-24 14:26:29.866028] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.656 [2024-07-24 14:26:29.866036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:02.656 [2024-07-24 14:26:29.866050] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x181900 00:28:02.656 ===================================================== 00:28:02.656 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:02.656 ===================================================== 00:28:02.656 Controller Capabilities/Features 00:28:02.656 ================================ 00:28:02.656 Vendor ID: 8086 00:28:02.656 Subsystem Vendor ID: 8086 00:28:02.656 Serial Number: SPDK00000000000001 00:28:02.656 Model Number: SPDK bdev Controller 00:28:02.656 Firmware Version: 24.05.1 00:28:02.656 Recommended Arb Burst: 6 00:28:02.656 IEEE OUI Identifier: e4 d2 5c 00:28:02.656 Multi-path I/O 00:28:02.656 May have multiple subsystem ports: Yes 00:28:02.656 May have multiple controllers: Yes 00:28:02.656 Associated with SR-IOV VF: No 00:28:02.656 Max Data Transfer Size: 131072 00:28:02.656 Max Number of Namespaces: 32 00:28:02.656 Max Number of I/O Queues: 127 00:28:02.656 NVMe Specification Version (VS): 1.3 00:28:02.656 NVMe Specification Version (Identify): 1.3 00:28:02.656 Maximum Queue Entries: 128 00:28:02.656 Contiguous Queues Required: Yes 00:28:02.656 Arbitration Mechanisms Supported 00:28:02.656 Weighted Round Robin: Not Supported 00:28:02.656 Vendor Specific: Not Supported 00:28:02.656 Reset Timeout: 15000 ms 00:28:02.656 Doorbell Stride: 4 bytes 00:28:02.656 NVM Subsystem Reset: Not Supported 00:28:02.656 Command Sets Supported 00:28:02.656 NVM Command Set: Supported 00:28:02.656 Boot Partition: Not Supported 00:28:02.656 Memory Page Size Minimum: 4096 bytes 00:28:02.656 Memory Page Size Maximum: 4096 bytes 00:28:02.656 Persistent Memory Region: Not Supported 00:28:02.656 Optional Asynchronous Events Supported 00:28:02.656 Namespace Attribute Notices: Supported 00:28:02.656 Firmware Activation Notices: Not Supported 00:28:02.656 ANA Change Notices: Not Supported 00:28:02.656 PLE Aggregate Log Change Notices: Not Supported 00:28:02.656 LBA Status Info Alert Notices: Not Supported 00:28:02.656 EGE Aggregate Log Change Notices: Not Supported 00:28:02.656 Normal NVM Subsystem Shutdown event: Not Supported 00:28:02.656 Zone Descriptor Change Notices: Not Supported 00:28:02.656 Discovery Log Change Notices: Not Supported 00:28:02.656 Controller Attributes 00:28:02.656 128-bit Host Identifier: Supported 00:28:02.656 Non-Operational Permissive Mode: Not Supported 00:28:02.656 NVM Sets: Not Supported 00:28:02.656 Read Recovery Levels: Not Supported 00:28:02.656 Endurance Groups: Not Supported 00:28:02.656 Predictable Latency Mode: Not Supported 00:28:02.656 Traffic Based Keep ALive: Not Supported 00:28:02.656 Namespace Granularity: Not Supported 00:28:02.656 SQ Associations: Not Supported 00:28:02.656 UUID List: Not Supported 00:28:02.656 Multi-Domain Subsystem: Not Supported 00:28:02.656 Fixed Capacity Management: Not Supported 00:28:02.656 Variable Capacity Management: Not Supported 00:28:02.656 Delete Endurance Group: Not Supported 00:28:02.656 Delete NVM Set: Not Supported 00:28:02.656 Extended LBA Formats Supported: Not Supported 00:28:02.656 Flexible Data Placement Supported: Not Supported 00:28:02.656 00:28:02.656 Controller Memory Buffer Support 00:28:02.656 
================================ 00:28:02.656 Supported: No 00:28:02.656 00:28:02.656 Persistent Memory Region Support 00:28:02.656 ================================ 00:28:02.656 Supported: No 00:28:02.656 00:28:02.656 Admin Command Set Attributes 00:28:02.656 ============================ 00:28:02.656 Security Send/Receive: Not Supported 00:28:02.656 Format NVM: Not Supported 00:28:02.656 Firmware Activate/Download: Not Supported 00:28:02.656 Namespace Management: Not Supported 00:28:02.656 Device Self-Test: Not Supported 00:28:02.656 Directives: Not Supported 00:28:02.656 NVMe-MI: Not Supported 00:28:02.656 Virtualization Management: Not Supported 00:28:02.656 Doorbell Buffer Config: Not Supported 00:28:02.656 Get LBA Status Capability: Not Supported 00:28:02.656 Command & Feature Lockdown Capability: Not Supported 00:28:02.656 Abort Command Limit: 4 00:28:02.657 Async Event Request Limit: 4 00:28:02.657 Number of Firmware Slots: N/A 00:28:02.657 Firmware Slot 1 Read-Only: N/A 00:28:02.657 Firmware Activation Without Reset: N/A 00:28:02.657 Multiple Update Detection Support: N/A 00:28:02.657 Firmware Update Granularity: No Information Provided 00:28:02.657 Per-Namespace SMART Log: No 00:28:02.657 Asymmetric Namespace Access Log Page: Not Supported 00:28:02.657 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:28:02.657 Command Effects Log Page: Supported 00:28:02.657 Get Log Page Extended Data: Supported 00:28:02.657 Telemetry Log Pages: Not Supported 00:28:02.657 Persistent Event Log Pages: Not Supported 00:28:02.657 Supported Log Pages Log Page: May Support 00:28:02.657 Commands Supported & Effects Log Page: Not Supported 00:28:02.657 Feature Identifiers & Effects Log Page:May Support 00:28:02.657 NVMe-MI Commands & Effects Log Page: May Support 00:28:02.657 Data Area 4 for Telemetry Log: Not Supported 00:28:02.657 Error Log Page Entries Supported: 128 00:28:02.657 Keep Alive: Supported 00:28:02.657 Keep Alive Granularity: 10000 ms 00:28:02.657 00:28:02.657 NVM Command Set Attributes 00:28:02.657 ========================== 00:28:02.657 Submission Queue Entry Size 00:28:02.657 Max: 64 00:28:02.657 Min: 64 00:28:02.657 Completion Queue Entry Size 00:28:02.657 Max: 16 00:28:02.657 Min: 16 00:28:02.657 Number of Namespaces: 32 00:28:02.657 Compare Command: Supported 00:28:02.657 Write Uncorrectable Command: Not Supported 00:28:02.657 Dataset Management Command: Supported 00:28:02.657 Write Zeroes Command: Supported 00:28:02.657 Set Features Save Field: Not Supported 00:28:02.657 Reservations: Supported 00:28:02.657 Timestamp: Not Supported 00:28:02.657 Copy: Supported 00:28:02.657 Volatile Write Cache: Present 00:28:02.657 Atomic Write Unit (Normal): 1 00:28:02.657 Atomic Write Unit (PFail): 1 00:28:02.657 Atomic Compare & Write Unit: 1 00:28:02.657 Fused Compare & Write: Supported 00:28:02.657 Scatter-Gather List 00:28:02.657 SGL Command Set: Supported 00:28:02.657 SGL Keyed: Supported 00:28:02.657 SGL Bit Bucket Descriptor: Not Supported 00:28:02.657 SGL Metadata Pointer: Not Supported 00:28:02.657 Oversized SGL: Not Supported 00:28:02.657 SGL Metadata Address: Not Supported 00:28:02.657 SGL Offset: Supported 00:28:02.657 Transport SGL Data Block: Not Supported 00:28:02.657 Replay Protected Memory Block: Not Supported 00:28:02.657 00:28:02.657 Firmware Slot Information 00:28:02.657 ========================= 00:28:02.657 Active slot: 1 00:28:02.657 Slot 1 Firmware Revision: 24.05.1 00:28:02.657 00:28:02.657 00:28:02.657 Commands Supported and Effects 00:28:02.657 ============================== 
00:28:02.657 Admin Commands 00:28:02.657 -------------- 00:28:02.657 Get Log Page (02h): Supported 00:28:02.657 Identify (06h): Supported 00:28:02.657 Abort (08h): Supported 00:28:02.657 Set Features (09h): Supported 00:28:02.657 Get Features (0Ah): Supported 00:28:02.657 Asynchronous Event Request (0Ch): Supported 00:28:02.657 Keep Alive (18h): Supported 00:28:02.657 I/O Commands 00:28:02.657 ------------ 00:28:02.657 Flush (00h): Supported LBA-Change 00:28:02.657 Write (01h): Supported LBA-Change 00:28:02.657 Read (02h): Supported 00:28:02.657 Compare (05h): Supported 00:28:02.657 Write Zeroes (08h): Supported LBA-Change 00:28:02.657 Dataset Management (09h): Supported LBA-Change 00:28:02.657 Copy (19h): Supported LBA-Change 00:28:02.657 Unknown (79h): Supported LBA-Change 00:28:02.657 Unknown (7Ah): Supported 00:28:02.657 00:28:02.657 Error Log 00:28:02.657 ========= 00:28:02.657 00:28:02.657 Arbitration 00:28:02.657 =========== 00:28:02.657 Arbitration Burst: 1 00:28:02.657 00:28:02.657 Power Management 00:28:02.657 ================ 00:28:02.657 Number of Power States: 1 00:28:02.657 Current Power State: Power State #0 00:28:02.657 Power State #0: 00:28:02.657 Max Power: 0.00 W 00:28:02.657 Non-Operational State: Operational 00:28:02.657 Entry Latency: Not Reported 00:28:02.657 Exit Latency: Not Reported 00:28:02.657 Relative Read Throughput: 0 00:28:02.657 Relative Read Latency: 0 00:28:02.657 Relative Write Throughput: 0 00:28:02.657 Relative Write Latency: 0 00:28:02.657 Idle Power: Not Reported 00:28:02.657 Active Power: Not Reported 00:28:02.657 Non-Operational Permissive Mode: Not Supported 00:28:02.657 00:28:02.657 Health Information 00:28:02.657 ================== 00:28:02.657 Critical Warnings: 00:28:02.657 Available Spare Space: OK 00:28:02.657 Temperature: OK 00:28:02.657 Device Reliability: OK 00:28:02.657 Read Only: No 00:28:02.657 Volatile Memory Backup: OK 00:28:02.657 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:02.657 Temperature Threshol[2024-07-24 14:26:29.866171] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x181900 00:28:02.657 [2024-07-24 14:26:29.866187] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.657 [2024-07-24 14:26:29.866212] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.657 [2024-07-24 14:26:29.866222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:02.657 [2024-07-24 14:26:29.866230] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x181900 00:28:02.657 [2024-07-24 14:26:29.866266] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:28:02.657 [2024-07-24 14:26:29.866281] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 56112 doesn't match qid 00:28:02.657 [2024-07-24 14:26:29.866299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32741 cdw0:5 sqhd:45f0 p:0 m:0 dnr:0 00:28:02.657 [2024-07-24 14:26:29.866309] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 56112 doesn't match qid 00:28:02.657 [2024-07-24 14:26:29.866320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32741 cdw0:5 sqhd:45f0 p:0 m:0 dnr:0 00:28:02.657 [2024-07-24 14:26:29.866329] 
nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 56112 doesn't match qid 00:28:02.657 [2024-07-24 14:26:29.866339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32741 cdw0:5 sqhd:45f0 p:0 m:0 dnr:0 00:28:02.657 [2024-07-24 14:26:29.866348] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 56112 doesn't match qid 00:28:02.657 [2024-07-24 14:26:29.866359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32741 cdw0:5 sqhd:45f0 p:0 m:0 dnr:0 00:28:02.657 [2024-07-24 14:26:29.866371] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x181900 00:28:02.657 [2024-07-24 14:26:29.866383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.657 [2024-07-24 14:26:29.866406] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.657 [2024-07-24 14:26:29.866415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:28:02.658 [2024-07-24 14:26:29.866426] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.658 [2024-07-24 14:26:29.866438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.658 [2024-07-24 14:26:29.866448] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x181900 00:28:02.658 [2024-07-24 14:26:29.866468] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.658 [2024-07-24 14:26:29.866477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:02.658 [2024-07-24 14:26:29.866485] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:28:02.658 [2024-07-24 14:26:29.866492] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:28:02.658 [2024-07-24 14:26:29.866500] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x181900 00:28:02.658 [2024-07-24 14:26:29.866513] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.658 [2024-07-24 14:26:29.866528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.658 [2024-07-24 14:26:29.866551] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.658 [2024-07-24 14:26:29.866560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:28:02.658 [2024-07-24 14:26:29.866568] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x181900 00:28:02.658 [2024-07-24 14:26:29.866580] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.658 [2024-07-24 14:26:29.866592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.658 [2024-07-24 14:26:29.866612] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.658 [2024-07-24 14:26:29.866621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:28:02.658 [2024-07-24 14:26:29.866629] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x181900 00:28:02.658 [2024-07-24 14:26:29.866642] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.658 [2024-07-24 14:26:29.866654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.658 [2024-07-24 14:26:29.866673] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.658 [2024-07-24 14:26:29.866682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:28:02.658 [2024-07-24 14:26:29.866691] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x181900 00:28:02.658 [2024-07-24 14:26:29.866704] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.658 [2024-07-24 14:26:29.866716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.658 [2024-07-24 14:26:29.866733] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.658 [2024-07-24 14:26:29.866742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:28:02.658 [2024-07-24 14:26:29.866750] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x181900 00:28:02.658 [2024-07-24 14:26:29.866763] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.658 [2024-07-24 14:26:29.866797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.658 [2024-07-24 14:26:29.866824] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.658 [2024-07-24 14:26:29.866844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:28:02.658 [2024-07-24 14:26:29.866853] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x181900 00:28:02.658 [2024-07-24 14:26:29.866866] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.658 [2024-07-24 14:26:29.866878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.658 [2024-07-24 14:26:29.866906] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.658 [2024-07-24 14:26:29.866915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:28:02.658 [2024-07-24 14:26:29.866924] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x181900 00:28:02.658 [2024-07-24 14:26:29.866941] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x181900 00:28:02.658 [2024-07-24 14:26:29.866954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.658 [2024-07-24 14:26:29.866977] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.658 [2024-07-24 14:26:29.866986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:28:02.658 [2024-07-24 14:26:29.866995] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x181900 00:28:02.658 [2024-07-24 14:26:29.867008] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.658 [2024-07-24 14:26:29.867020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.658 [2024-07-24 14:26:29.867041] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.658 [2024-07-24 14:26:29.867049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:28:02.658 [2024-07-24 14:26:29.867058] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x181900 00:28:02.658 [2024-07-24 14:26:29.867071] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.658 [2024-07-24 14:26:29.867098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.658 [2024-07-24 14:26:29.867126] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.658 [2024-07-24 14:26:29.867135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:28:02.658 [2024-07-24 14:26:29.867143] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x181900 00:28:02.658 [2024-07-24 14:26:29.867171] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.658 [2024-07-24 14:26:29.867182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.658 [2024-07-24 14:26:29.867206] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.658 [2024-07-24 14:26:29.867215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:28:02.658 [2024-07-24 14:26:29.867223] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x181900 00:28:02.658 [2024-07-24 14:26:29.867235] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.658 [2024-07-24 14:26:29.867246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.658 [2024-07-24 14:26:29.867268] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.658 [2024-07-24 14:26:29.867276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:28:02.658 [2024-07-24 
14:26:29.867285] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x181900 00:28:02.658 [2024-07-24 14:26:29.867296] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.658 [2024-07-24 14:26:29.867307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.658 [2024-07-24 14:26:29.867329] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.658 [2024-07-24 14:26:29.867337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:28:02.658 [2024-07-24 14:26:29.867345] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x181900 00:28:02.658 [2024-07-24 14:26:29.867360] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.658 [2024-07-24 14:26:29.867372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.658 [2024-07-24 14:26:29.867391] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.658 [2024-07-24 14:26:29.867400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:28:02.659 [2024-07-24 14:26:29.867408] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x181900 00:28:02.659 [2024-07-24 14:26:29.867420] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.659 [2024-07-24 14:26:29.867431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.659 [2024-07-24 14:26:29.867452] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.659 [2024-07-24 14:26:29.867460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:28:02.659 [2024-07-24 14:26:29.867469] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x181900 00:28:02.659 [2024-07-24 14:26:29.867480] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.659 [2024-07-24 14:26:29.867491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.659 [2024-07-24 14:26:29.867511] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.659 [2024-07-24 14:26:29.867519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:28:02.659 [2024-07-24 14:26:29.867527] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x181900 00:28:02.659 [2024-07-24 14:26:29.867539] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.659 [2024-07-24 14:26:29.867550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.659 [2024-07-24 14:26:29.867572] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.659 [2024-07-24 14:26:29.867580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:28:02.659 [2024-07-24 14:26:29.867588] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x181900 00:28:02.659 [2024-07-24 14:26:29.867600] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.659 [2024-07-24 14:26:29.867611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.659 [2024-07-24 14:26:29.867630] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.659 [2024-07-24 14:26:29.867639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:28:02.659 [2024-07-24 14:26:29.867647] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x181900 00:28:02.659 [2024-07-24 14:26:29.867659] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.659 [2024-07-24 14:26:29.867669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.659 [2024-07-24 14:26:29.867689] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.659 [2024-07-24 14:26:29.867697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:28:02.659 [2024-07-24 14:26:29.867708] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x181900 00:28:02.659 [2024-07-24 14:26:29.867720] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.659 [2024-07-24 14:26:29.867732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.659 [2024-07-24 14:26:29.867751] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.659 [2024-07-24 14:26:29.867760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:28:02.659 [2024-07-24 14:26:29.867768] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x181900 00:28:02.659 [2024-07-24 14:26:29.871810] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181900 00:28:02.659 [2024-07-24 14:26:29.871827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:28:02.659 [2024-07-24 14:26:29.871849] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:28:02.659 [2024-07-24 14:26:29.871858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000e p:0 m:0 dnr:0 00:28:02.659 [2024-07-24 14:26:29.871867] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x181900 00:28:02.659 [2024-07-24 14:26:29.871877] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
shutdown complete in 5 milliseconds 00:28:02.659 d: 0 Kelvin (-273 Celsius) 00:28:02.659 Available Spare: 0% 00:28:02.659 Available Spare Threshold: 0% 00:28:02.659 Life Percentage Used: 0% 00:28:02.659 Data Units Read: 0 00:28:02.659 Data Units Written: 0 00:28:02.659 Host Read Commands: 0 00:28:02.659 Host Write Commands: 0 00:28:02.659 Controller Busy Time: 0 minutes 00:28:02.659 Power Cycles: 0 00:28:02.659 Power On Hours: 0 hours 00:28:02.659 Unsafe Shutdowns: 0 00:28:02.659 Unrecoverable Media Errors: 0 00:28:02.659 Lifetime Error Log Entries: 0 00:28:02.659 Warning Temperature Time: 0 minutes 00:28:02.659 Critical Temperature Time: 0 minutes 00:28:02.659 00:28:02.659 Number of Queues 00:28:02.659 ================ 00:28:02.659 Number of I/O Submission Queues: 127 00:28:02.659 Number of I/O Completion Queues: 127 00:28:02.659 00:28:02.659 Active Namespaces 00:28:02.659 ================= 00:28:02.659 Namespace ID:1 00:28:02.659 Error Recovery Timeout: Unlimited 00:28:02.659 Command Set Identifier: NVM (00h) 00:28:02.659 Deallocate: Supported 00:28:02.659 Deallocated/Unwritten Error: Not Supported 00:28:02.659 Deallocated Read Value: Unknown 00:28:02.659 Deallocate in Write Zeroes: Not Supported 00:28:02.659 Deallocated Guard Field: 0xFFFF 00:28:02.659 Flush: Supported 00:28:02.659 Reservation: Supported 00:28:02.659 Namespace Sharing Capabilities: Multiple Controllers 00:28:02.659 Size (in LBAs): 131072 (0GiB) 00:28:02.659 Capacity (in LBAs): 131072 (0GiB) 00:28:02.659 Utilization (in LBAs): 131072 (0GiB) 00:28:02.659 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:02.659 EUI64: ABCDEF0123456789 00:28:02.659 UUID: ff8b2035-87ec-4e55-ad5f-c476d86bb8b1 00:28:02.659 Thin Provisioning: Not Supported 00:28:02.659 Per-NS Atomic Units: Yes 00:28:02.659 Atomic Boundary Size (Normal): 0 00:28:02.659 Atomic Boundary Size (PFail): 0 00:28:02.659 Atomic Boundary Offset: 0 00:28:02.659 Maximum Single Source Range Length: 65535 00:28:02.659 Maximum Copy Length: 65535 00:28:02.659 Maximum Source Range Count: 1 00:28:02.659 NGUID/EUI64 Never Reused: No 00:28:02.659 Namespace Write Protected: No 00:28:02.659 Number of LBA Formats: 1 00:28:02.659 Current LBA Format: LBA Format #00 00:28:02.659 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:02.659 00:28:02.659 14:26:29 nvmf_rdma.nvmf_identify -- host/identify.sh@51 -- # sync 00:28:02.659 14:26:29 nvmf_rdma.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:02.659 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.659 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:02.659 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.659 14:26:29 nvmf_rdma.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:02.659 14:26:29 nvmf_rdma.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:28:02.660 14:26:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:02.660 14:26:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:28:02.660 14:26:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:28:02.660 14:26:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:28:02.660 14:26:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:28:02.660 14:26:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:02.660 14:26:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@122 -- # 
modprobe -v -r nvme-rdma 00:28:02.660 rmmod nvme_rdma 00:28:02.660 rmmod nvme_fabrics 00:28:02.660 14:26:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:02.660 14:26:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:28:02.660 14:26:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:28:02.660 14:26:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 200560 ']' 00:28:02.660 14:26:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 200560 00:28:02.660 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 200560 ']' 00:28:02.660 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 200560 00:28:02.660 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:28:02.660 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:02.660 14:26:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 200560 00:28:02.660 14:26:30 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:02.660 14:26:30 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:02.660 14:26:30 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 200560' 00:28:02.660 killing process with pid 200560 00:28:02.660 14:26:30 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@965 -- # kill 200560 00:28:02.660 14:26:30 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@970 -- # wait 200560 00:28:03.227 14:26:30 nvmf_rdma.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:03.227 14:26:30 nvmf_rdma.nvmf_identify -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:28:03.227 00:28:03.227 real 0m3.895s 00:28:03.227 user 0m5.284s 00:28:03.227 sys 0m2.082s 00:28:03.227 14:26:30 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:03.227 14:26:30 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:03.227 ************************************ 00:28:03.227 END TEST nvmf_identify 00:28:03.227 ************************************ 00:28:03.227 14:26:30 nvmf_rdma -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:28:03.227 14:26:30 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:03.227 14:26:30 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:03.227 14:26:30 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:03.227 ************************************ 00:28:03.227 START TEST nvmf_perf 00:28:03.227 ************************************ 00:28:03.227 14:26:30 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:28:03.227 * Looking for test storage... 
00:28:03.227 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:03.227 14:26:30 nvmf_rdma.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:03.227 14:26:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:03.227 14:26:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:03.227 14:26:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:03.227 14:26:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:03.227 14:26:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:03.227 14:26:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:03.227 14:26:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:03.227 14:26:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:03.227 14:26:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:03.227 14:26:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:03.227 14:26:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:03.227 14:26:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:28:03.227 14:26:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:28:03.227 14:26:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:03.227 14:26:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:03.227 14:26:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:03.227 14:26:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:03.227 14:26:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:03.227 14:26:30 nvmf_rdma.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:03.227 14:26:30 nvmf_rdma.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:03.227 14:26:30 nvmf_rdma.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:03.227 14:26:30 nvmf_rdma.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.227 14:26:30 nvmf_rdma.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.227 
14:26:30 nvmf_rdma.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.227 14:26:30 nvmf_rdma.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:03.227 14:26:30 nvmf_rdma.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.227 14:26:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:28:03.227 14:26:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:03.227 14:26:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:03.227 14:26:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:03.227 14:26:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:03.227 14:26:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:03.227 14:26:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:03.227 14:26:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:03.228 14:26:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:03.228 14:26:30 nvmf_rdma.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:03.228 14:26:30 nvmf_rdma.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:03.228 14:26:30 nvmf_rdma.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:28:03.228 14:26:30 nvmf_rdma.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:28:03.228 14:26:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:28:03.228 14:26:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:03.228 14:26:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:03.228 14:26:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:03.228 14:26:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:03.228 14:26:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.228 14:26:30 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:03.228 14:26:30 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.228 14:26:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:03.228 14:26:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:03.228 14:26:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:03.228 14:26:30 nvmf_rdma.nvmf_perf -- 
common/autotest_common.sh@10 -- # set +x 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:28:05.757 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 
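The trace above is nvmf/common.sh enumerating RDMA-capable NICs: it builds e810/x722/mlx PCI device-ID lists, matches them against the bus cache, and finds the two ConnectX ports at 0000:81:00.0/.1 (0x15b3 - 0x1015). A minimal standalone sketch of the same scan over sysfs — the ID set below is a small illustrative subset, not the harness's full table:

    # enumerate PCI functions whose vendor/device IDs mark them as RDMA-capable
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(cat "$dev/vendor"); device=$(cat "$dev/device")
        case "$vendor:$device" in
            0x15b3:*)                     echo "Found ${dev##*/} ($vendor - $device)" ;;  # Mellanox/NVIDIA
            0x8086:0x1592|0x8086:0x159b)  echo "Found ${dev##*/} ($vendor - $device)" ;;  # Intel E810
        esac
    done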
00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:28:05.757 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:28:05.757 Found net devices under 0000:81:00.0: mlx_0_0 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:28:05.757 Found net devices under 0000:81:00.1: mlx_0_1 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@420 -- # rdma_device_init 
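Each matched PCI function is then resolved to its kernel net device by globbing "$pci/net/" in sysfs, which is how the log arrives at mlx_0_0 and mlx_0_1. A sketch of that resolution step, assuming the standard sysfs layout:

    pci=0000:81:00.0                   # repeat for 0000:81:00.1
    # the directory name under $pci/net/ is the netdev bound to that function
    for netdir in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$netdir" ] && echo "Found net devices under $pci: ${netdir##*/}"
    done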
00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@58 -- # uname 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@63 -- # modprobe ib_core 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:28:05.757 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:28:05.758 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:05.758 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:28:05.758 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:28:05.758 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:05.758 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:05.758 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:05.758 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:28:05.758 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:28:05.758 14:26:32 nvmf_rdma.nvmf_perf 
-- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:28:05.758 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:05.758 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:28:05.758 altname enp129s0f0np0 00:28:05.758 inet 192.168.100.8/24 scope global mlx_0_0 00:28:05.758 valid_lft forever preferred_lft forever 00:28:05.758 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:05.758 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:28:05.758 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:05.758 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:28:05.758 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:05.758 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:05.758 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:28:05.758 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:28:05.758 14:26:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:28:05.758 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:05.758 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:28:05.758 altname enp129s0f1np1 00:28:05.758 inet 192.168.100.9/24 scope global mlx_0_1 00:28:05.758 valid_lft forever preferred_lft forever 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 
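After loading the IB/RDMA modules, allocate_nic_ips walks the RDMA interface list and reads each port's IPv4 address back with the ip/awk/cut pipeline visible in the trace (192.168.100.8 on mlx_0_0, 192.168.100.9 on mlx_0_1). The helper reduces to roughly:

    get_ip_address() {
        local interface=$1
        # first IPv4 address on the interface, prefix length stripped
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # 192.168.100.8 on this rig
    get_ip_address mlx_0_1   # 192.168.100.9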
00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:28:05.758 192.168.100.9' 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:28:05.758 192.168.100.9' 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # head -n 1 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:28:05.758 192.168.100.9' 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # tail -n +2 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # head -n 1 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=202775 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 202775 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 202775 ']' 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@834 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:05.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:05.758 14:26:33 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:05.758 [2024-07-24 14:26:33.096278] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:28:05.758 [2024-07-24 14:26:33.096362] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:06.017 EAL: No free 2048 kB hugepages reported on node 1 00:28:06.017 [2024-07-24 14:26:33.170228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:06.017 [2024-07-24 14:26:33.263210] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:06.017 [2024-07-24 14:26:33.263270] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:06.017 [2024-07-24 14:26:33.263295] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:06.017 [2024-07-24 14:26:33.263309] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:06.017 [2024-07-24 14:26:33.263320] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:06.017 [2024-07-24 14:26:33.265815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.017 [2024-07-24 14:26:33.265866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:06.017 [2024-07-24 14:26:33.265951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:06.017 [2024-07-24 14:26:33.265955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:06.274 14:26:33 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:06.274 14:26:33 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:28:06.274 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:06.274 14:26:33 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:06.274 14:26:33 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:06.274 14:26:33 nvmf_rdma.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:06.274 14:26:33 nvmf_rdma.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:06.274 14:26:33 nvmf_rdma.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:09.588 14:26:36 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:09.588 14:26:36 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:09.588 14:26:36 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:84:00.0 00:28:09.588 14:26:36 nvmf_rdma.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:09.846 14:26:37 nvmf_rdma.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:09.846 14:26:37 nvmf_rdma.nvmf_perf -- 
host/perf.sh@33 -- # '[' -n 0000:84:00.0 ']' 00:28:09.846 14:26:37 nvmf_rdma.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:09.846 14:26:37 nvmf_rdma.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:28:09.846 14:26:37 nvmf_rdma.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:28:10.104 [2024-07-24 14:26:37.250207] rdma.c:2726:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:28:10.104 [2024-07-24 14:26:37.272826] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x9884e0/0x996290) succeed. 00:28:10.104 [2024-07-24 14:26:37.283959] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x989b20/0xa16310) succeed. 00:28:10.104 14:26:37 nvmf_rdma.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:10.361 14:26:37 nvmf_rdma.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:10.361 14:26:37 nvmf_rdma.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:10.619 14:26:37 nvmf_rdma.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:10.619 14:26:37 nvmf_rdma.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:10.877 14:26:38 nvmf_rdma.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:11.135 [2024-07-24 14:26:38.366382] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:11.135 14:26:38 nvmf_rdma.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:28:11.393 14:26:38 nvmf_rdma.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:84:00.0 ']' 00:28:11.393 14:26:38 nvmf_rdma.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:84:00.0' 00:28:11.393 14:26:38 nvmf_rdma.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:11.393 14:26:38 nvmf_rdma.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:84:00.0' 00:28:12.765 Initializing NVMe Controllers 00:28:12.765 Attached to NVMe Controller at 0000:84:00.0 [8086:0a54] 00:28:12.765 Associating PCIE (0000:84:00.0) NSID 1 with lcore 0 00:28:12.765 Initialization complete. Launching workers. 
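The invocation above is the local baseline: spdk_nvme_perf drives the PCIe-attached SSD at 0000:84:00.0 directly, before any NVMe-oF fabric runs, and the table that follows reports its IOPS and latency. A minimal sketch of the same pattern, assuming an SPDK checkout at $SPDK_ROOT (the workspace path in the traces above) and the controller address reported by the gen_nvme.sh step:

    # 4 KiB random 50/50 read/write, queue depth 32, 1 second, against the local PCIe controller
    $SPDK_ROOT/build/bin/spdk_nvme_perf \
        -q 32 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:PCIe traddr:0000:84:00.0'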
00:28:12.765 ======================================================== 00:28:12.765 Latency(us) 00:28:12.765 Device Information : IOPS MiB/s Average min max 00:28:12.765 PCIE (0000:84:00.0) NSID 1 from core 0: 85253.08 333.02 374.73 10.38 4557.43 00:28:12.765 ======================================================== 00:28:12.765 Total : 85253.08 333.02 374.73 10.38 4557.43 00:28:12.765 00:28:12.766 14:26:39 nvmf_rdma.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:12.766 EAL: No free 2048 kB hugepages reported on node 1 00:28:16.043 Initializing NVMe Controllers 00:28:16.043 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:16.043 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:16.043 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:16.043 Initialization complete. Launching workers. 00:28:16.043 ======================================================== 00:28:16.043 Latency(us) 00:28:16.043 Device Information : IOPS MiB/s Average min max 00:28:16.043 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5490.51 21.45 181.13 65.16 4131.83 00:28:16.043 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4439.25 17.34 225.00 85.91 4137.31 00:28:16.043 ======================================================== 00:28:16.043 Total : 9929.76 38.79 200.74 65.16 4137.31 00:28:16.043 00:28:16.043 14:26:43 nvmf_rdma.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:16.043 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.320 Initializing NVMe Controllers 00:28:19.320 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:19.320 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:19.320 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:19.320 Initialization complete. Launching workers. 00:28:19.320 ======================================================== 00:28:19.320 Latency(us) 00:28:19.320 Device Information : IOPS MiB/s Average min max 00:28:19.320 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14305.00 55.88 2237.70 605.24 9295.78 00:28:19.320 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3968.00 15.50 8099.66 7651.72 16054.05 00:28:19.320 ======================================================== 00:28:19.320 Total : 18273.00 71.38 3510.63 605.24 16054.05 00:28:19.320 00:28:19.320 14:26:46 nvmf_rdma.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:28:19.320 14:26:46 nvmf_rdma.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:19.320 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.578 Initializing NVMe Controllers 00:28:24.578 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:24.578 Controller IO queue size 128, less than required. 
00:28:24.578 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:24.578 Controller IO queue size 128, less than required. 00:28:24.578 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:24.578 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:24.578 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:24.578 Initialization complete. Launching workers. 00:28:24.578 ======================================================== 00:28:24.578 Latency(us) 00:28:24.578 Device Information : IOPS MiB/s Average min max 00:28:24.578 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3238.29 809.57 39612.26 22081.49 90099.88 00:28:24.578 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2630.02 657.50 48367.93 7395.60 149580.31 00:28:24.578 ======================================================== 00:28:24.578 Total : 5868.31 1467.08 43536.31 7395.60 149580.31 00:28:24.578 00:28:24.578 14:26:51 nvmf_rdma.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:28:24.578 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.578 No valid NVMe controllers or AIO or URING devices found 00:28:24.578 Initializing NVMe Controllers 00:28:24.578 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:24.578 Controller IO queue size 128, less than required. 00:28:24.578 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:24.578 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:24.578 Controller IO queue size 128, less than required. 00:28:24.578 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:24.578 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:28:24.578 WARNING: Some requested NVMe devices were skipped 00:28:24.578 14:26:51 nvmf_rdma.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:28:24.578 EAL: No free 2048 kB hugepages reported on node 1 00:28:28.759 Initializing NVMe Controllers 00:28:28.759 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:28.759 Controller IO queue size 128, less than required. 00:28:28.759 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:28.759 Controller IO queue size 128, less than required. 00:28:28.759 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:28.759 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:28.759 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:28.759 Initialization complete. Launching workers. 
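The last run above passes --transport-stat, so before the usual latency table spdk_nvme_perf dumps per-device RDMA transport counters: polls and idle polls, completions, queued requests, send/receive work requests, and doorbell updates. A sketch of that call, assuming the same listener at 192.168.100.8:4420 and $SPDK_ROOT as above:

    # 256 KiB random 50/50 at queue depth 128 for 2 s, with RDMA transport statistics
    $SPDK_ROOT/build/bin/spdk_nvme_perf \
        -q 128 -o 262144 -w randrw -M 50 -t 2 \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        --transport-stat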
00:28:28.759 00:28:28.759 ==================== 00:28:28.759 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:28.759 RDMA transport: 00:28:28.759 dev name: mlx5_0 00:28:28.759 polls: 329139 00:28:28.759 idle_polls: 326777 00:28:28.759 completions: 33586 00:28:28.759 queued_requests: 1 00:28:28.759 total_send_wrs: 16793 00:28:28.759 send_doorbell_updates: 2166 00:28:28.759 total_recv_wrs: 16920 00:28:28.759 recv_doorbell_updates: 2167 00:28:28.759 --------------------------------- 00:28:28.759 00:28:28.759 ==================== 00:28:28.759 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:28.759 RDMA transport: 00:28:28.759 dev name: mlx5_0 00:28:28.759 polls: 333599 00:28:28.759 idle_polls: 333327 00:28:28.759 completions: 15910 00:28:28.759 queued_requests: 1 00:28:28.759 total_send_wrs: 7955 00:28:28.759 send_doorbell_updates: 254 00:28:28.759 total_recv_wrs: 8082 00:28:28.759 recv_doorbell_updates: 257 00:28:28.759 --------------------------------- 00:28:28.759 ======================================================== 00:28:28.759 Latency(us) 00:28:28.759 Device Information : IOPS MiB/s Average min max 00:28:28.759 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4191.06 1047.76 30551.15 15636.86 72553.27 00:28:28.759 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1985.21 496.30 64735.51 31692.32 119975.52 00:28:28.759 ======================================================== 00:28:28.760 Total : 6176.27 1544.07 41538.88 15636.86 119975.52 00:28:28.760 00:28:28.760 14:26:55 nvmf_rdma.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:28.760 14:26:55 nvmf_rdma.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:28.760 14:26:56 nvmf_rdma.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:28.760 14:26:56 nvmf_rdma.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:84:00.0 ']' 00:28:28.760 14:26:56 nvmf_rdma.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:32.038 14:26:59 nvmf_rdma.nvmf_perf -- host/perf.sh@72 -- # ls_guid=9e36a367-475e-47c7-9c1d-43d6ece9f90c 00:28:32.038 14:26:59 nvmf_rdma.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 9e36a367-475e-47c7-9c1d-43d6ece9f90c 00:28:32.038 14:26:59 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=9e36a367-475e-47c7-9c1d-43d6ece9f90c 00:28:32.038 14:26:59 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:28:32.038 14:26:59 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:28:32.038 14:26:59 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:28:32.038 14:26:59 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:32.326 14:26:59 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:28:32.326 { 00:28:32.326 "uuid": "9e36a367-475e-47c7-9c1d-43d6ece9f90c", 00:28:32.326 "name": "lvs_0", 00:28:32.326 "base_bdev": "Nvme0n1", 00:28:32.326 "total_data_clusters": 238234, 00:28:32.326 "free_clusters": 238234, 00:28:32.326 "block_size": 512, 00:28:32.326 "cluster_size": 4194304 00:28:32.326 } 00:28:32.326 ]' 00:28:32.326 14:26:59 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | 
select(.uuid=="9e36a367-475e-47c7-9c1d-43d6ece9f90c") .free_clusters' 00:28:32.326 14:26:59 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=238234 00:28:32.326 14:26:59 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="9e36a367-475e-47c7-9c1d-43d6ece9f90c") .cluster_size' 00:28:32.583 14:26:59 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:28:32.583 14:26:59 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=952936 00:28:32.583 14:26:59 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 952936 00:28:32.583 952936 00:28:32.583 14:26:59 nvmf_rdma.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:28:32.583 14:26:59 nvmf_rdma.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:28:32.583 14:26:59 nvmf_rdma.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9e36a367-475e-47c7-9c1d-43d6ece9f90c lbd_0 20480 00:28:33.147 14:27:00 nvmf_rdma.nvmf_perf -- host/perf.sh@80 -- # lb_guid=b4680c15-fe9d-4bfd-811b-92397015e398 00:28:33.147 14:27:00 nvmf_rdma.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore b4680c15-fe9d-4bfd-811b-92397015e398 lvs_n_0 00:28:34.079 14:27:01 nvmf_rdma.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=0404a70b-7d4d-41a3-8634-ba09f0ee289f 00:28:34.079 14:27:01 nvmf_rdma.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 0404a70b-7d4d-41a3-8634-ba09f0ee289f 00:28:34.079 14:27:01 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=0404a70b-7d4d-41a3-8634-ba09f0ee289f 00:28:34.079 14:27:01 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:28:34.079 14:27:01 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:28:34.079 14:27:01 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:28:34.079 14:27:01 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:34.079 14:27:01 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:28:34.079 { 00:28:34.079 "uuid": "9e36a367-475e-47c7-9c1d-43d6ece9f90c", 00:28:34.079 "name": "lvs_0", 00:28:34.079 "base_bdev": "Nvme0n1", 00:28:34.079 "total_data_clusters": 238234, 00:28:34.079 "free_clusters": 233114, 00:28:34.079 "block_size": 512, 00:28:34.079 "cluster_size": 4194304 00:28:34.079 }, 00:28:34.079 { 00:28:34.079 "uuid": "0404a70b-7d4d-41a3-8634-ba09f0ee289f", 00:28:34.079 "name": "lvs_n_0", 00:28:34.079 "base_bdev": "b4680c15-fe9d-4bfd-811b-92397015e398", 00:28:34.079 "total_data_clusters": 5114, 00:28:34.079 "free_clusters": 5114, 00:28:34.079 "block_size": 512, 00:28:34.079 "cluster_size": 4194304 00:28:34.079 } 00:28:34.079 ]' 00:28:34.079 14:27:01 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="0404a70b-7d4d-41a3-8634-ba09f0ee289f") .free_clusters' 00:28:34.079 14:27:01 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=5114 00:28:34.080 14:27:01 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="0404a70b-7d4d-41a3-8634-ba09f0ee289f") .cluster_size' 00:28:34.337 14:27:01 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:28:34.337 14:27:01 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=20456 00:28:34.337 14:27:01 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 20456 
00:28:34.337 20456 00:28:34.337 14:27:01 nvmf_rdma.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:34.337 14:27:01 nvmf_rdma.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0404a70b-7d4d-41a3-8634-ba09f0ee289f lbd_nest_0 20456 00:28:34.595 14:27:01 nvmf_rdma.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=43af0491-ab42-43cf-ba9a-d4f805a01206 00:28:34.595 14:27:01 nvmf_rdma.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:34.852 14:27:01 nvmf_rdma.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:34.852 14:27:01 nvmf_rdma.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 43af0491-ab42-43cf-ba9a-d4f805a01206 00:28:35.110 14:27:02 nvmf_rdma.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:35.368 14:27:02 nvmf_rdma.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:35.368 14:27:02 nvmf_rdma.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:35.368 14:27:02 nvmf_rdma.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:35.368 14:27:02 nvmf_rdma.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:35.368 14:27:02 nvmf_rdma.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:35.368 EAL: No free 2048 kB hugepages reported on node 1 00:28:47.562 Initializing NVMe Controllers 00:28:47.562 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:47.562 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:47.562 Initialization complete. Launching workers. 00:28:47.562 ======================================================== 00:28:47.562 Latency(us) 00:28:47.562 Device Information : IOPS MiB/s Average min max 00:28:47.562 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4265.89 2.08 233.96 94.21 6121.99 00:28:47.562 ======================================================== 00:28:47.562 Total : 4265.89 2.08 233.96 94.21 6121.99 00:28:47.562 00:28:47.562 14:27:13 nvmf_rdma.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:47.562 14:27:13 nvmf_rdma.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:47.562 EAL: No free 2048 kB hugepages reported on node 1 00:28:59.761 Initializing NVMe Controllers 00:28:59.761 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:28:59.761 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:59.761 Initialization complete. Launching workers. 
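From this point perf.sh sweeps every combination of the two arrays defined above, qd_depth=(1 32 128) against io_size=(512 131072), so six 10-second randrw runs in total; each table that follows reports one combination. A sketch of the driving loop, assuming the listener and $SPDK_ROOT from the earlier steps:

    qd_depth=("1" "32" "128")
    io_size=("512" "131072")
    for qd in "${qd_depth[@]}"; do
        for o in "${io_size[@]}"; do
            $SPDK_ROOT/build/bin/spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
                -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
        done
    done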
00:28:59.761 ======================================================== 00:28:59.761 Latency(us) 00:28:59.761 Device Information : IOPS MiB/s Average min max 00:28:59.761 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2210.80 276.35 451.84 190.89 7174.29 00:28:59.761 ======================================================== 00:28:59.761 Total : 2210.80 276.35 451.84 190.89 7174.29 00:28:59.761 00:28:59.761 14:27:25 nvmf_rdma.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:59.761 14:27:25 nvmf_rdma.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:59.761 14:27:25 nvmf_rdma.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:28:59.761 EAL: No free 2048 kB hugepages reported on node 1 00:29:09.770 Initializing NVMe Controllers 00:29:09.770 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:09.770 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:09.770 Initialization complete. Launching workers. 00:29:09.770 ======================================================== 00:29:09.770 Latency(us) 00:29:09.770 Device Information : IOPS MiB/s Average min max 00:29:09.770 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8872.60 4.33 3607.61 1392.64 9189.28 00:29:09.770 ======================================================== 00:29:09.770 Total : 8872.60 4.33 3607.61 1392.64 9189.28 00:29:09.770 00:29:09.770 14:27:36 nvmf_rdma.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:09.770 14:27:36 nvmf_rdma.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:29:09.770 EAL: No free 2048 kB hugepages reported on node 1 00:29:21.964 Initializing NVMe Controllers 00:29:21.964 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:21.964 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:21.964 Initialization complete. Launching workers. 00:29:21.964 ======================================================== 00:29:21.964 Latency(us) 00:29:21.964 Device Information : IOPS MiB/s Average min max 00:29:21.964 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3396.99 424.62 9419.68 5861.89 23855.10 00:29:21.964 ======================================================== 00:29:21.964 Total : 3396.99 424.62 9419.68 5861.89 23855.10 00:29:21.964 00:29:21.964 14:27:47 nvmf_rdma.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:21.964 14:27:47 nvmf_rdma.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:21.964 14:27:47 nvmf_rdma.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:29:21.964 EAL: No free 2048 kB hugepages reported on node 1 00:29:34.167 Initializing NVMe Controllers 00:29:34.167 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:34.167 Controller IO queue size 128, less than required. 
00:29:34.167 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:34.167 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:34.167 Initialization complete. Launching workers. 00:29:34.167 ======================================================== 00:29:34.167 Latency(us) 00:29:34.167 Device Information : IOPS MiB/s Average min max 00:29:34.167 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14143.98 6.91 9050.40 2639.61 17191.79 00:29:34.167 ======================================================== 00:29:34.167 Total : 14143.98 6.91 9050.40 2639.61 17191.79 00:29:34.167 00:29:34.167 14:27:59 nvmf_rdma.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:34.167 14:27:59 nvmf_rdma.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:29:34.167 EAL: No free 2048 kB hugepages reported on node 1 00:29:44.138 Initializing NVMe Controllers 00:29:44.138 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:44.138 Controller IO queue size 128, less than required. 00:29:44.138 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:44.138 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:44.138 Initialization complete. Launching workers. 00:29:44.138 ======================================================== 00:29:44.138 Latency(us) 00:29:44.138 Device Information : IOPS MiB/s Average min max 00:29:44.138 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6703.23 837.90 19099.70 652.67 111508.97 00:29:44.138 ======================================================== 00:29:44.138 Total : 6703.23 837.90 19099.70 652.67 111508.97 00:29:44.138 00:29:44.138 14:28:10 nvmf_rdma.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:44.138 14:28:10 nvmf_rdma.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 43af0491-ab42-43cf-ba9a-d4f805a01206 00:29:44.397 14:28:11 nvmf_rdma.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:44.654 14:28:11 nvmf_rdma.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b4680c15-fe9d-4bfd-811b-92397015e398 00:29:45.220 14:28:12 nvmf_rdma.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:45.220 14:28:12 nvmf_rdma.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:45.220 14:28:12 nvmf_rdma.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:29:45.220 14:28:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:45.220 14:28:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:29:45.220 14:28:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:29:45.220 14:28:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:29:45.220 14:28:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:29:45.220 14:28:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@121 -- # 
for i in {1..20} 00:29:45.220 14:28:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:29:45.220 rmmod nvme_rdma 00:29:45.220 rmmod nvme_fabrics 00:29:45.220 14:28:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:45.220 14:28:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:29:45.220 14:28:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:29:45.220 14:28:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 202775 ']' 00:29:45.220 14:28:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 202775 00:29:45.220 14:28:12 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 202775 ']' 00:29:45.220 14:28:12 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 202775 00:29:45.220 14:28:12 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:29:45.220 14:28:12 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:45.220 14:28:12 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 202775 00:29:45.220 14:28:12 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:45.220 14:28:12 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:45.220 14:28:12 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 202775' 00:29:45.220 killing process with pid 202775 00:29:45.220 14:28:12 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@965 -- # kill 202775 00:29:45.220 14:28:12 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@970 -- # wait 202775 00:29:47.119 14:28:14 nvmf_rdma.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:47.119 14:28:14 nvmf_rdma.nvmf_perf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:29:47.119 00:29:47.119 real 1m43.856s 00:29:47.119 user 6m47.382s 00:29:47.119 sys 0m3.724s 00:29:47.119 14:28:14 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:47.119 14:28:14 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:47.119 ************************************ 00:29:47.119 END TEST nvmf_perf 00:29:47.119 ************************************ 00:29:47.119 14:28:14 nvmf_rdma -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:29:47.119 14:28:14 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:47.119 14:28:14 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:47.119 14:28:14 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:29:47.119 ************************************ 00:29:47.119 START TEST nvmf_fio_host 00:29:47.119 ************************************ 00:29:47.119 14:28:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:29:47.119 * Looking for test storage... 
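Each suite runs through the same run_test wrapper from autotest_common.sh: it times the script (the real/user/sys lines above), brackets its output with the START TEST / END TEST banners, and a non-zero exit fails the whole job. A sketch of the calling pattern, assuming autotest_common.sh is already sourced; the storage probe just below then resolves the new suite's working directory:

    # run_test <label> <script> [args...] prints the START/END banners around the script's output
    run_test nvmf_fio_host "$SPDK_ROOT/test/nvmf/host/fio.sh" --transport=rdma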
00:29:47.119 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:29:47.119 14:28:14 nvmf_rdma.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:47.119 14:28:14 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:47.119 14:28:14 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:29:47.120 14:28:14 nvmf_rdma.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 
00:29:49.692 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:29:49.692 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:49.692 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:29:49.693 Found net devices under 0000:81:00.0: mlx_0_0 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:29:49.693 Found net devices under 0000:81:00.1: mlx_0_1 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ 
yes == yes ]] 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@420 -- # rdma_device_init 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@58 -- # uname 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- 
nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:29:49.693 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:49.693 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:29:49.693 altname enp129s0f0np0 00:29:49.693 inet 192.168.100.8/24 scope global mlx_0_0 00:29:49.693 valid_lft forever preferred_lft forever 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:29:49.693 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:49.693 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:29:49.693 altname enp129s0f1np1 00:29:49.693 inet 192.168.100.9/24 scope global mlx_0_1 00:29:49.693 valid_lft forever preferred_lft forever 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # get_rdma_if_list 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- 
nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:29:49.693 192.168.100.9' 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:29:49.693 192.168.100.9' 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # head -n 1 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:29:49.693 192.168.100.9' 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # tail -n +2 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # head -n 1 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:29:49.693 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:49.694 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:29:49.694 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:29:49.694 14:28:16 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:29:49.694 14:28:16 nvmf_rdma.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:29:49.694 14:28:16 nvmf_rdma.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:49.694 14:28:16 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:49.694 14:28:16 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.694 14:28:16 
nvmf_rdma.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=217055 00:29:49.694 14:28:16 nvmf_rdma.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:49.694 14:28:16 nvmf_rdma.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:49.694 14:28:16 nvmf_rdma.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 217055 00:29:49.694 14:28:16 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 217055 ']' 00:29:49.694 14:28:16 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:49.694 14:28:16 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:49.694 14:28:16 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:49.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:49.694 14:28:16 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:49.694 14:28:16 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.694 [2024-07-24 14:28:16.929838] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:29:49.694 [2024-07-24 14:28:16.929922] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:49.694 EAL: No free 2048 kB hugepages reported on node 1 00:29:49.694 [2024-07-24 14:28:17.004947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:49.952 [2024-07-24 14:28:17.097404] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:49.952 [2024-07-24 14:28:17.097467] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:49.952 [2024-07-24 14:28:17.097495] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:49.952 [2024-07-24 14:28:17.097509] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:49.952 [2024-07-24 14:28:17.097522] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:49.952 [2024-07-24 14:28:17.097588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:49.952 [2024-07-24 14:28:17.097664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:49.952 [2024-07-24 14:28:17.097760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:49.952 [2024-07-24 14:28:17.097762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.952 14:28:17 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:49.952 14:28:17 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:29:49.952 14:28:17 nvmf_rdma.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:29:50.210 [2024-07-24 14:28:17.468523] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb969e0/0xb9aed0) succeed. 00:29:50.210 [2024-07-24 14:28:17.479319] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb97fd0/0xbdc560) succeed. 
00:29:50.468 14:28:17 nvmf_rdma.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:50.468 14:28:17 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:50.468 14:28:17 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.469 14:28:17 nvmf_rdma.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:50.726 Malloc1 00:29:50.726 14:28:17 nvmf_rdma.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:50.985 14:28:18 nvmf_rdma.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:51.243 14:28:18 nvmf_rdma.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:51.500 [2024-07-24 14:28:18.637919] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:51.500 14:28:18 nvmf_rdma.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:29:51.758 14:28:18 nvmf_rdma.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:29:51.758 14:28:18 nvmf_rdma.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:51.758 14:28:18 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:51.758 14:28:18 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:51.758 14:28:18 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:51.758 14:28:18 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:51.758 14:28:18 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:51.758 14:28:18 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:29:51.758 14:28:18 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:51.758 14:28:18 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:51.758 14:28:18 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:51.758 14:28:18 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:29:51.758 14:28:18 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:51.758 14:28:18 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:51.758 14:28:18 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:51.758 14:28:18 nvmf_rdma.nvmf_fio_host -- 
common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:51.758 14:28:18 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:51.758 14:28:18 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:51.758 14:28:18 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:51.758 14:28:18 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:51.758 14:28:18 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:51.758 14:28:18 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:51.758 14:28:18 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:29:51.758 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:51.758 fio-3.35 00:29:51.758 Starting 1 thread 00:29:52.015 EAL: No free 2048 kB hugepages reported on node 1 00:29:54.544 00:29:54.544 test: (groupid=0, jobs=1): err= 0: pid=217408: Wed Jul 24 14:28:21 2024 00:29:54.544 read: IOPS=13.5k, BW=52.6MiB/s (55.1MB/s)(105MiB/2005msec) 00:29:54.544 slat (nsec): min=1805, max=32309, avg=2012.22, stdev=763.57 00:29:54.544 clat (usec): min=1920, max=8543, avg=4729.43, stdev=164.76 00:29:54.544 lat (usec): min=1931, max=8545, avg=4731.44, stdev=164.73 00:29:54.544 clat percentiles (usec): 00:29:54.544 | 1.00th=[ 4293], 5.00th=[ 4621], 10.00th=[ 4621], 20.00th=[ 4686], 00:29:54.544 | 30.00th=[ 4686], 40.00th=[ 4686], 50.00th=[ 4686], 60.00th=[ 4752], 00:29:54.544 | 70.00th=[ 4752], 80.00th=[ 4752], 90.00th=[ 4817], 95.00th=[ 4883], 00:29:54.544 | 99.00th=[ 5145], 99.50th=[ 5276], 99.90th=[ 6652], 99.95th=[ 7832], 00:29:54.544 | 99.99th=[ 8455] 00:29:54.544 bw ( KiB/s): min=52976, max=54264, per=100.00%, avg=53842.00, stdev=590.21, samples=4 00:29:54.544 iops : min=13244, max=13566, avg=13460.50, stdev=147.55, samples=4 00:29:54.544 write: IOPS=13.4k, BW=52.5MiB/s (55.1MB/s)(105MiB/2005msec); 0 zone resets 00:29:54.544 slat (nsec): min=1875, max=32380, avg=2113.56, stdev=821.59 00:29:54.544 clat (usec): min=1936, max=8552, avg=4727.79, stdev=168.75 00:29:54.544 lat (usec): min=1941, max=8554, avg=4729.91, stdev=168.73 00:29:54.544 clat percentiles (usec): 00:29:54.544 | 1.00th=[ 4293], 5.00th=[ 4621], 10.00th=[ 4621], 20.00th=[ 4686], 00:29:54.544 | 30.00th=[ 4686], 40.00th=[ 4686], 50.00th=[ 4686], 60.00th=[ 4752], 00:29:54.544 | 70.00th=[ 4752], 80.00th=[ 4752], 90.00th=[ 4817], 95.00th=[ 4883], 00:29:54.544 | 99.00th=[ 5145], 99.50th=[ 5342], 99.90th=[ 6718], 99.95th=[ 7898], 00:29:54.544 | 99.99th=[ 8586] 00:29:54.544 bw ( KiB/s): min=53256, max=54064, per=100.00%, avg=53806.00, stdev=370.97, samples=4 00:29:54.544 iops : min=13314, max=13516, avg=13451.50, stdev=92.74, samples=4 00:29:54.544 lat (msec) : 2=0.02%, 4=0.19%, 10=99.78% 00:29:54.544 cpu : usr=99.55%, sys=0.00%, ctx=16, majf=0, minf=4 00:29:54.544 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:29:54.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:54.544 issued rwts: 
total=26987,26964,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:54.544 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:54.544 00:29:54.544 Run status group 0 (all jobs): 00:29:54.544 READ: bw=52.6MiB/s (55.1MB/s), 52.6MiB/s-52.6MiB/s (55.1MB/s-55.1MB/s), io=105MiB (111MB), run=2005-2005msec 00:29:54.544 WRITE: bw=52.5MiB/s (55.1MB/s), 52.5MiB/s-52.5MiB/s (55.1MB/s-55.1MB/s), io=105MiB (110MB), run=2005-2005msec 00:29:54.544 14:28:21 nvmf_rdma.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:29:54.544 14:28:21 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:29:54.544 14:28:21 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:54.544 14:28:21 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:54.544 14:28:21 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:54.544 14:28:21 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:54.544 14:28:21 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:29:54.544 14:28:21 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:54.544 14:28:21 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:54.544 14:28:21 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:54.544 14:28:21 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:29:54.545 14:28:21 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:54.545 14:28:21 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:54.545 14:28:21 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:54.545 14:28:21 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:54.545 14:28:21 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:29:54.545 14:28:21 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:54.545 14:28:21 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:54.545 14:28:21 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:54.545 14:28:21 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:54.545 14:28:21 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:54.545 14:28:21 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:29:54.545 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, 
ioengine=spdk, iodepth=128 00:29:54.545 fio-3.35 00:29:54.545 Starting 1 thread 00:29:54.545 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.072 00:29:57.072 test: (groupid=0, jobs=1): err= 0: pid=217860: Wed Jul 24 14:28:23 2024 00:29:57.072 read: IOPS=10.8k, BW=169MiB/s (178MB/s)(336MiB/1981msec) 00:29:57.072 slat (nsec): min=2786, max=39879, avg=3316.26, stdev=1380.58 00:29:57.072 clat (usec): min=587, max=10501, avg=2057.37, stdev=1596.34 00:29:57.072 lat (usec): min=590, max=10504, avg=2060.68, stdev=1596.66 00:29:57.072 clat percentiles (usec): 00:29:57.072 | 1.00th=[ 906], 5.00th=[ 1045], 10.00th=[ 1123], 20.00th=[ 1237], 00:29:57.072 | 30.00th=[ 1336], 40.00th=[ 1434], 50.00th=[ 1549], 60.00th=[ 1696], 00:29:57.072 | 70.00th=[ 1844], 80.00th=[ 2040], 90.00th=[ 3818], 95.00th=[ 6587], 00:29:57.072 | 99.00th=[ 8455], 99.50th=[ 9110], 99.90th=[ 9765], 99.95th=[ 9896], 00:29:57.072 | 99.99th=[10421] 00:29:57.072 bw ( KiB/s): min=84832, max=92608, per=50.16%, avg=87048.00, stdev=3716.35, samples=4 00:29:57.072 iops : min= 5302, max= 5788, avg=5440.50, stdev=232.27, samples=4 00:29:57.072 write: IOPS=6138, BW=95.9MiB/s (101MB/s)(177MiB/1843msec); 0 zone resets 00:29:57.072 slat (nsec): min=30459, max=70323, avg=33511.51, stdev=5052.72 00:29:57.072 clat (usec): min=5932, max=27155, avg=17212.90, stdev=2557.62 00:29:57.072 lat (usec): min=5965, max=27191, avg=17246.41, stdev=2557.47 00:29:57.072 clat percentiles (usec): 00:29:57.072 | 1.00th=[ 9765], 5.00th=[13304], 10.00th=[14091], 20.00th=[15270], 00:29:57.072 | 30.00th=[15926], 40.00th=[16712], 50.00th=[17171], 60.00th=[17957], 00:29:57.072 | 70.00th=[18482], 80.00th=[19006], 90.00th=[20055], 95.00th=[21365], 00:29:57.072 | 99.00th=[23987], 99.50th=[24773], 99.90th=[25560], 99.95th=[25822], 00:29:57.072 | 99.99th=[27132] 00:29:57.072 bw ( KiB/s): min=85504, max=96256, per=91.30%, avg=89680.00, stdev=4652.86, samples=4 00:29:57.072 iops : min= 5344, max= 6016, avg=5605.00, stdev=290.80, samples=4 00:29:57.072 lat (usec) : 750=0.03%, 1000=2.06% 00:29:57.072 lat (msec) : 2=49.32%, 4=7.65%, 10=6.78%, 20=30.37%, 50=3.79% 00:29:57.072 cpu : usr=97.36%, sys=1.10%, ctx=144, majf=0, minf=2 00:29:57.072 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:29:57.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:57.072 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:57.072 issued rwts: total=21488,11314,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:57.072 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:57.072 00:29:57.072 Run status group 0 (all jobs): 00:29:57.072 READ: bw=169MiB/s (178MB/s), 169MiB/s-169MiB/s (178MB/s-178MB/s), io=336MiB (352MB), run=1981-1981msec 00:29:57.072 WRITE: bw=95.9MiB/s (101MB/s), 95.9MiB/s-95.9MiB/s (101MB/s-101MB/s), io=177MiB (185MB), run=1843-1843msec 00:29:57.072 14:28:24 nvmf_rdma.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:57.072 14:28:24 nvmf_rdma.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:29:57.072 14:28:24 nvmf_rdma.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:29:57.072 14:28:24 nvmf_rdma.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:29:57.072 14:28:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1509 -- # bdfs=() 00:29:57.072 14:28:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1509 -- # local bdfs 00:29:57.072 14:28:24 
nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:57.072 14:28:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:57.072 14:28:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:29:57.072 14:28:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:29:57.072 14:28:24 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:84:00.0 00:29:57.072 14:28:24 nvmf_rdma.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:84:00.0 -i 192.168.100.8 00:30:00.353 Nvme0n1 00:30:00.353 14:28:27 nvmf_rdma.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:03.638 14:28:30 nvmf_rdma.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=9b108a3c-3a85-43d4-964c-84c797682a2e 00:30:03.638 14:28:30 nvmf_rdma.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 9b108a3c-3a85-43d4-964c-84c797682a2e 00:30:03.638 14:28:30 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=9b108a3c-3a85-43d4-964c-84c797682a2e 00:30:03.638 14:28:30 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:03.638 14:28:30 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:30:03.638 14:28:30 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:30:03.638 14:28:30 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:03.638 14:28:30 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:03.638 { 00:30:03.638 "uuid": "9b108a3c-3a85-43d4-964c-84c797682a2e", 00:30:03.638 "name": "lvs_0", 00:30:03.638 "base_bdev": "Nvme0n1", 00:30:03.638 "total_data_clusters": 930, 00:30:03.638 "free_clusters": 930, 00:30:03.638 "block_size": 512, 00:30:03.638 "cluster_size": 1073741824 00:30:03.638 } 00:30:03.638 ]' 00:30:03.638 14:28:30 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="9b108a3c-3a85-43d4-964c-84c797682a2e") .free_clusters' 00:30:03.638 14:28:30 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=930 00:30:03.638 14:28:30 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="9b108a3c-3a85-43d4-964c-84c797682a2e") .cluster_size' 00:30:03.638 14:28:30 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=1073741824 00:30:03.638 14:28:30 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=952320 00:30:03.638 14:28:30 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 952320 00:30:03.638 952320 00:30:03.638 14:28:30 nvmf_rdma.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:30:03.897 78a6254d-cf90-4af9-b4fb-d6061748a44c 00:30:03.897 14:28:31 nvmf_rdma.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:04.156 14:28:31 nvmf_rdma.nvmf_fio_host -- host/fio.sh@57 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:04.415 14:28:31 nvmf_rdma.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:30:04.673 14:28:31 nvmf_rdma.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:30:04.673 14:28:31 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:30:04.673 14:28:31 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:04.673 14:28:31 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:04.673 14:28:31 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:04.673 14:28:31 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:30:04.673 14:28:31 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:04.673 14:28:31 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:04.673 14:28:31 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:04.673 14:28:31 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:30:04.673 14:28:31 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:04.673 14:28:31 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:04.673 14:28:31 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:04.673 14:28:31 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:04.673 14:28:31 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:04.673 14:28:31 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:30:04.673 14:28:31 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:04.673 14:28:31 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:04.673 14:28:31 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:04.673 14:28:31 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:04.673 14:28:31 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:04.673 14:28:31 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:30:04.673 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:04.673 fio-3.35 00:30:04.673 Starting 1 thread 
00:30:04.931 EAL: No free 2048 kB hugepages reported on node 1 00:30:07.494 00:30:07.494 test: (groupid=0, jobs=1): err= 0: pid=219140: Wed Jul 24 14:28:34 2024 00:30:07.494 read: IOPS=7971, BW=31.1MiB/s (32.6MB/s)(62.5MiB/2006msec) 00:30:07.494 slat (nsec): min=1757, max=45047, avg=2125.97, stdev=1383.43 00:30:07.494 clat (usec): min=497, max=169743, avg=7990.50, stdev=10299.90 00:30:07.494 lat (usec): min=499, max=169788, avg=7992.63, stdev=10299.97 00:30:07.494 clat percentiles (msec): 00:30:07.494 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 8], 00:30:07.494 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 8], 60.00th=[ 8], 00:30:07.494 | 70.00th=[ 8], 80.00th=[ 8], 90.00th=[ 8], 95.00th=[ 8], 00:30:07.494 | 99.00th=[ 9], 99.50th=[ 12], 99.90th=[ 169], 99.95th=[ 169], 00:30:07.494 | 99.99th=[ 169] 00:30:07.494 bw ( KiB/s): min=22160, max=35344, per=99.92%, avg=31858.00, stdev=6467.82, samples=4 00:30:07.494 iops : min= 5540, max= 8836, avg=7964.50, stdev=1616.95, samples=4 00:30:07.494 write: IOPS=7947, BW=31.0MiB/s (32.6MB/s)(62.3MiB/2006msec); 0 zone resets 00:30:07.494 slat (nsec): min=1846, max=33246, avg=2236.24, stdev=1493.19 00:30:07.494 clat (usec): min=211, max=170081, avg=7937.52, stdev=9652.49 00:30:07.494 lat (usec): min=213, max=170085, avg=7939.76, stdev=9652.54 00:30:07.494 clat percentiles (msec): 00:30:07.494 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 8], 00:30:07.494 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 8], 60.00th=[ 8], 00:30:07.494 | 70.00th=[ 8], 80.00th=[ 8], 90.00th=[ 8], 95.00th=[ 8], 00:30:07.494 | 99.00th=[ 9], 99.50th=[ 12], 99.90th=[ 171], 99.95th=[ 171], 00:30:07.494 | 99.99th=[ 171] 00:30:07.494 bw ( KiB/s): min=23040, max=34808, per=99.88%, avg=31752.00, stdev=5810.69, samples=4 00:30:07.494 iops : min= 5760, max= 8702, avg=7938.00, stdev=1452.67, samples=4 00:30:07.494 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:30:07.494 lat (msec) : 2=0.02%, 4=0.15%, 10=99.23%, 20=0.17%, 250=0.40% 00:30:07.494 cpu : usr=99.45%, sys=0.10%, ctx=16, majf=0, minf=4 00:30:07.494 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:07.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:07.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:07.494 issued rwts: total=15990,15942,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:07.494 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:07.494 00:30:07.494 Run status group 0 (all jobs): 00:30:07.494 READ: bw=31.1MiB/s (32.6MB/s), 31.1MiB/s-31.1MiB/s (32.6MB/s-32.6MB/s), io=62.5MiB (65.5MB), run=2006-2006msec 00:30:07.494 WRITE: bw=31.0MiB/s (32.6MB/s), 31.0MiB/s-31.0MiB/s (32.6MB/s-32.6MB/s), io=62.3MiB (65.3MB), run=2006-2006msec 00:30:07.494 14:28:34 nvmf_rdma.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:07.494 14:28:34 nvmf_rdma.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:08.426 14:28:35 nvmf_rdma.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=31d6ded4-4115-4b1b-aed6-2015f2ca5a3b 00:30:08.426 14:28:35 nvmf_rdma.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 31d6ded4-4115-4b1b-aed6-2015f2ca5a3b 00:30:08.426 14:28:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=31d6ded4-4115-4b1b-aed6-2015f2ca5a3b 00:30:08.426 14:28:35 
nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:08.426 14:28:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:30:08.426 14:28:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:30:08.426 14:28:35 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:08.685 14:28:36 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:08.685 { 00:30:08.685 "uuid": "9b108a3c-3a85-43d4-964c-84c797682a2e", 00:30:08.685 "name": "lvs_0", 00:30:08.685 "base_bdev": "Nvme0n1", 00:30:08.685 "total_data_clusters": 930, 00:30:08.685 "free_clusters": 0, 00:30:08.685 "block_size": 512, 00:30:08.685 "cluster_size": 1073741824 00:30:08.685 }, 00:30:08.685 { 00:30:08.685 "uuid": "31d6ded4-4115-4b1b-aed6-2015f2ca5a3b", 00:30:08.685 "name": "lvs_n_0", 00:30:08.685 "base_bdev": "78a6254d-cf90-4af9-b4fb-d6061748a44c", 00:30:08.685 "total_data_clusters": 237847, 00:30:08.685 "free_clusters": 237847, 00:30:08.685 "block_size": 512, 00:30:08.685 "cluster_size": 4194304 00:30:08.685 } 00:30:08.685 ]' 00:30:08.685 14:28:36 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="31d6ded4-4115-4b1b-aed6-2015f2ca5a3b") .free_clusters' 00:30:08.685 14:28:36 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=237847 00:30:08.943 14:28:36 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="31d6ded4-4115-4b1b-aed6-2015f2ca5a3b") .cluster_size' 00:30:08.943 14:28:36 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=4194304 00:30:08.943 14:28:36 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=951388 00:30:08.943 14:28:36 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 951388 00:30:08.943 951388 00:30:08.943 14:28:36 nvmf_rdma.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:30:09.508 65f73b51-eb17-4213-8e89-526a808e5ae1 00:30:09.508 14:28:36 nvmf_rdma.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:09.766 14:28:37 nvmf_rdma.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:10.024 14:28:37 nvmf_rdma.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:30:10.283 14:28:37 nvmf_rdma.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:30:10.283 14:28:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:30:10.283 14:28:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:10.283 14:28:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1335 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:30:10.283 14:28:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:10.283 14:28:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:30:10.283 14:28:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:10.283 14:28:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:10.283 14:28:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:10.283 14:28:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:30:10.283 14:28:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:10.283 14:28:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:10.283 14:28:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:10.283 14:28:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:10.283 14:28:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:10.283 14:28:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:30:10.283 14:28:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:10.283 14:28:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:10.283 14:28:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:10.283 14:28:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:10.283 14:28:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:10.283 14:28:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:30:10.542 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:10.542 fio-3.35 00:30:10.542 Starting 1 thread 00:30:10.542 EAL: No free 2048 kB hugepages reported on node 1 00:30:13.070 00:30:13.070 test: (groupid=0, jobs=1): err= 0: pid=219880: Wed Jul 24 14:28:40 2024 00:30:13.070 read: IOPS=7448, BW=29.1MiB/s (30.5MB/s)(58.4MiB/2008msec) 00:30:13.070 slat (nsec): min=1824, max=38846, avg=2199.31, stdev=1546.19 00:30:13.070 clat (usec): min=4579, max=15309, avg=8519.82, stdev=356.78 00:30:13.070 lat (usec): min=4582, max=15311, avg=8522.02, stdev=356.76 00:30:13.070 clat percentiles (usec): 00:30:13.070 | 1.00th=[ 7504], 5.00th=[ 8291], 10.00th=[ 8356], 20.00th=[ 8455], 00:30:13.070 | 30.00th=[ 8455], 40.00th=[ 8455], 50.00th=[ 8455], 60.00th=[ 8586], 00:30:13.070 | 70.00th=[ 8586], 80.00th=[ 8586], 90.00th=[ 8717], 95.00th=[ 8848], 00:30:13.070 | 99.00th=[ 9503], 99.50th=[ 9634], 99.90th=[13042], 99.95th=[15139], 00:30:13.070 | 99.99th=[15270] 00:30:13.070 bw ( KiB/s): min=28680, max=30520, per=100.00%, avg=29820.00, stdev=810.95, samples=4 00:30:13.070 iops : min= 7170, max= 7630, avg=7455.00, stdev=202.74, samples=4 00:30:13.070 write: IOPS=7442, BW=29.1MiB/s (30.5MB/s)(58.4MiB/2008msec); 0 
zone resets 00:30:13.070 slat (nsec): min=1894, max=34370, avg=2306.59, stdev=1502.50 00:30:13.070 clat (usec): min=4596, max=15318, avg=8562.27, stdev=393.35 00:30:13.070 lat (usec): min=4600, max=15320, avg=8564.58, stdev=393.34 00:30:13.070 clat percentiles (usec): 00:30:13.070 | 1.00th=[ 7504], 5.00th=[ 8356], 10.00th=[ 8455], 20.00th=[ 8455], 00:30:13.070 | 30.00th=[ 8455], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8586], 00:30:13.070 | 70.00th=[ 8586], 80.00th=[ 8717], 90.00th=[ 8717], 95.00th=[ 8848], 00:30:13.070 | 99.00th=[ 9634], 99.50th=[ 9896], 99.90th=[14222], 99.95th=[15139], 00:30:13.070 | 99.99th=[15270] 00:30:13.070 bw ( KiB/s): min=29552, max=29936, per=99.86%, avg=29730.00, stdev=169.38, samples=4 00:30:13.070 iops : min= 7388, max= 7484, avg=7432.50, stdev=42.34, samples=4 00:30:13.070 lat (msec) : 10=99.66%, 20=0.34% 00:30:13.070 cpu : usr=99.55%, sys=0.00%, ctx=15, majf=0, minf=4 00:30:13.070 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:13.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.070 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:13.070 issued rwts: total=14957,14945,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:13.070 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:13.070 00:30:13.070 Run status group 0 (all jobs): 00:30:13.070 READ: bw=29.1MiB/s (30.5MB/s), 29.1MiB/s-29.1MiB/s (30.5MB/s-30.5MB/s), io=58.4MiB (61.3MB), run=2008-2008msec 00:30:13.070 WRITE: bw=29.1MiB/s (30.5MB/s), 29.1MiB/s-29.1MiB/s (30.5MB/s-30.5MB/s), io=58.4MiB (61.2MB), run=2008-2008msec 00:30:13.070 14:28:40 nvmf_rdma.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:13.070 14:28:40 nvmf_rdma.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:30:13.070 14:28:40 nvmf_rdma.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:17.258 14:28:44 nvmf_rdma.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:17.258 14:28:44 nvmf_rdma.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:30:20.541 14:28:47 nvmf_rdma.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:20.541 14:28:47 nvmf_rdma.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:22.446 14:28:49 nvmf_rdma.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:22.446 14:28:49 nvmf_rdma.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:22.446 14:28:49 nvmf_rdma.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:30:22.446 14:28:49 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:22.446 14:28:49 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:30:22.446 14:28:49 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:30:22.446 14:28:49 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:30:22.446 14:28:49 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:30:22.446 14:28:49 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:22.446 14:28:49 nvmf_rdma.nvmf_fio_host -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:30:22.446 rmmod nvme_rdma 00:30:22.446 rmmod nvme_fabrics 00:30:22.446 14:28:49 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:22.446 14:28:49 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:30:22.446 14:28:49 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:30:22.446 14:28:49 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 217055 ']' 00:30:22.446 14:28:49 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 217055 00:30:22.446 14:28:49 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 217055 ']' 00:30:22.446 14:28:49 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 217055 00:30:22.446 14:28:49 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:30:22.446 14:28:49 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:22.446 14:28:49 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 217055 00:30:22.446 14:28:49 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:22.446 14:28:49 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:22.446 14:28:49 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 217055' 00:30:22.446 killing process with pid 217055 00:30:22.446 14:28:49 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 217055 00:30:22.446 14:28:49 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 217055 00:30:22.705 14:28:49 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:22.705 14:28:49 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:30:22.705 00:30:22.705 real 0m35.589s 00:30:22.705 user 2m26.377s 00:30:22.705 sys 0m3.777s 00:30:22.705 14:28:49 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:22.705 14:28:49 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.705 ************************************ 00:30:22.705 END TEST nvmf_fio_host 00:30:22.705 ************************************ 00:30:22.705 14:28:49 nvmf_rdma -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:30:22.705 14:28:49 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:22.705 14:28:49 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:22.705 14:28:49 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:30:22.705 ************************************ 00:30:22.705 START TEST nvmf_failover 00:30:22.705 ************************************ 00:30:22.705 14:28:49 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:30:22.705 * Looking for test storage... 
00:30:22.705 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:30:22.705 14:28:49 nvmf_rdma.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:22.705 14:28:49 nvmf_rdma.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:22.705 14:28:49 nvmf_rdma.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:22.705 14:28:49 nvmf_rdma.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:22.705 14:28:49 nvmf_rdma.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@410 
-- # local -g is_hw=no 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:30:22.706 14:28:50 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:25.239 
14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:30:25.239 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:30:25.239 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:30:25.239 Found net devices under 0000:81:00.0: mlx_0_0 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- 
nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:30:25.239 Found net devices under 0000:81:00.1: mlx_0_1 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@420 -- # rdma_device_init 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@58 -- # uname 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@62 -- # modprobe ib_cm 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@63 -- # modprobe ib_core 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@64 -- # modprobe ib_umad 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@66 -- # modprobe iw_cm 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@502 -- # allocate_nic_ips 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:25.239 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # get_rdma_if_list 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # 
[[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:30:25.240 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:25.240 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:30:25.240 altname enp129s0f0np0 00:30:25.240 inet 192.168.100.8/24 scope global mlx_0_0 00:30:25.240 valid_lft forever preferred_lft forever 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:30:25.240 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:25.240 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:30:25.240 altname enp129s0f1np1 00:30:25.240 inet 192.168.100.9/24 scope global mlx_0_1 00:30:25.240 valid_lft forever preferred_lft forever 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # get_rdma_if_list 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:30:25.240 
14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:30:25.240 192.168.100.9' 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:30:25.240 192.168.100.9' 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # head -n 1 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:30:25.240 192.168.100.9' 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # tail -n +2 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # head -n 1 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- 
nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=223251 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 223251 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 223251 ']' 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:25.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:25.240 14:28:52 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:25.240 [2024-07-24 14:28:52.590461] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:30:25.240 [2024-07-24 14:28:52.590543] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:25.499 EAL: No free 2048 kB hugepages reported on node 1 00:30:25.499 [2024-07-24 14:28:52.660747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:25.499 [2024-07-24 14:28:52.744613] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:25.499 [2024-07-24 14:28:52.744686] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:25.499 [2024-07-24 14:28:52.744701] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:25.499 [2024-07-24 14:28:52.744712] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:25.499 [2024-07-24 14:28:52.744721] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:25.499 [2024-07-24 14:28:52.744771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:25.499 [2024-07-24 14:28:52.744827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:25.499 [2024-07-24 14:28:52.744831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:25.499 14:28:52 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:25.499 14:28:52 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:25.499 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:25.499 14:28:52 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:25.499 14:28:52 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:25.499 14:28:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:25.499 14:28:52 nvmf_rdma.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:30:25.758 [2024-07-24 14:28:53.118730] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x181d200/0x18216b0) succeed. 00:30:26.047 [2024-07-24 14:28:53.129833] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x181e750/0x1862d40) succeed. 00:30:26.047 14:28:53 nvmf_rdma.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:26.305 Malloc0 00:30:26.305 14:28:53 nvmf_rdma.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:26.562 14:28:53 nvmf_rdma.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:26.820 14:28:54 nvmf_rdma.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:30:27.078 [2024-07-24 14:28:54.290645] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:27.078 14:28:54 nvmf_rdma.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:30:27.336 [2024-07-24 14:28:54.583425] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:30:27.336 14:28:54 nvmf_rdma.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:30:27.594 [2024-07-24 14:28:54.872439] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:30:27.594 14:28:54 nvmf_rdma.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=223541 00:30:27.594 14:28:54 nvmf_rdma.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:27.594 14:28:54 nvmf_rdma.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; 
rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:27.594 14:28:54 nvmf_rdma.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 223541 /var/tmp/bdevperf.sock 00:30:27.594 14:28:54 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 223541 ']' 00:30:27.594 14:28:54 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:27.594 14:28:54 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:27.594 14:28:54 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:27.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:27.594 14:28:54 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:27.594 14:28:54 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:27.852 14:28:55 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:27.852 14:28:55 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:27.852 14:28:55 nvmf_rdma.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:28.421 NVMe0n1 00:30:28.421 14:28:55 nvmf_rdma.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:28.679 00:30:28.679 14:28:55 nvmf_rdma.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=223673 00:30:28.679 14:28:55 nvmf_rdma.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:28.679 14:28:55 nvmf_rdma.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:30:29.613 14:28:56 nvmf_rdma.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:30:29.871 14:28:57 nvmf_rdma.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:30:33.157 14:29:00 nvmf_rdma.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:33.157 00:30:33.157 14:29:00 nvmf_rdma.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:30:33.415 14:29:00 nvmf_rdma.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:30:36.698 14:29:03 nvmf_rdma.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:30:36.698 [2024-07-24 14:29:04.013140] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:36.698 14:29:04 nvmf_rdma.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:30:38.070 14:29:05 nvmf_rdma.nvmf_failover -- 
host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:30:38.070 14:29:05 nvmf_rdma.nvmf_failover -- host/failover.sh@59 -- # wait 223673 00:30:44.645 0 00:30:44.645 14:29:11 nvmf_rdma.nvmf_failover -- host/failover.sh@61 -- # killprocess 223541 00:30:44.645 14:29:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 223541 ']' 00:30:44.645 14:29:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 223541 00:30:44.645 14:29:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:30:44.645 14:29:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:44.645 14:29:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 223541 00:30:44.645 14:29:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:44.645 14:29:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:44.645 14:29:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 223541' 00:30:44.645 killing process with pid 223541 00:30:44.645 14:29:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@965 -- # kill 223541 00:30:44.645 14:29:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@970 -- # wait 223541 00:30:44.645 14:29:11 nvmf_rdma.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:44.645 [2024-07-24 14:28:54.931019] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:30:44.645 [2024-07-24 14:28:54.931126] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223541 ] 00:30:44.645 EAL: No free 2048 kB hugepages reported on node 1 00:30:44.645 [2024-07-24 14:28:55.002440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.645 [2024-07-24 14:28:55.089169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:44.645 Running I/O for 15 seconds... 
00:30:44.645 [2024-07-24 14:28:58.133842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:123672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x180900 00:30:44.645 [2024-07-24 14:28:58.133922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52965 cdw0:dbe56000 sqhd:0776 p:1 m:0 dnr:0
[... the same print_command/print_completion pair repeats for every outstanding I/O on qid:1 of the deleted submission queue: READ commands for lba:123680 through lba:123896 (len:8, SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 through 0x2000075fe000, len:0x1000, key:0x180900), then WRITE commands for lba:123904 through lba:124584 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000); every completion reports ABORTED - SQ DELETION (00/08) qid:1 cid:52965 cdw0:dbe56000 sqhd:0776 p:1 m:0 dnr:0 ...]
00:30:44.648 [2024-07-24 14:28:58.137548] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:124592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.648 [2024-07-24 14:28:58.137562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52965 cdw0:dbe56000 sqhd:0776 p:1 m:0 dnr:0 00:30:44.648 [2024-07-24 14:28:58.137576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:124600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.649 [2024-07-24 14:28:58.137589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52965 cdw0:dbe56000 sqhd:0776 p:1 m:0 dnr:0 00:30:44.649 [2024-07-24 14:28:58.137603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:124608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.649 [2024-07-24 14:28:58.137623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52965 cdw0:dbe56000 sqhd:0776 p:1 m:0 dnr:0 00:30:44.649 [2024-07-24 14:28:58.137643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:124616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.649 [2024-07-24 14:28:58.137657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52965 cdw0:dbe56000 sqhd:0776 p:1 m:0 dnr:0 00:30:44.649 [2024-07-24 14:28:58.137672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:124624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.649 [2024-07-24 14:28:58.137684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52965 cdw0:dbe56000 sqhd:0776 p:1 m:0 dnr:0 00:30:44.649 [2024-07-24 14:28:58.137699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:124632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.649 [2024-07-24 14:28:58.137712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52965 cdw0:dbe56000 sqhd:0776 p:1 m:0 dnr:0 00:30:44.649 [2024-07-24 14:28:58.137726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:124640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.649 [2024-07-24 14:28:58.137740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52965 cdw0:dbe56000 sqhd:0776 p:1 m:0 dnr:0 00:30:44.649 [2024-07-24 14:28:58.137754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:124648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.649 [2024-07-24 14:28:58.137767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52965 cdw0:dbe56000 sqhd:0776 p:1 m:0 dnr:0 00:30:44.649 [2024-07-24 14:28:58.137807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:124656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.649 [2024-07-24 14:28:58.137822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52965 cdw0:dbe56000 sqhd:0776 p:1 m:0 dnr:0 00:30:44.649 [2024-07-24 14:28:58.137837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:124664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.649 [2024-07-24 14:28:58.137850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52965 cdw0:dbe56000 sqhd:0776 p:1 
m:0 dnr:0 00:30:44.649 [2024-07-24 14:28:58.137866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:124672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.649 [2024-07-24 14:28:58.137879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52965 cdw0:dbe56000 sqhd:0776 p:1 m:0 dnr:0 00:30:44.649 [2024-07-24 14:28:58.137895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:124680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:44.649 [2024-07-24 14:28:58.137908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52965 cdw0:dbe56000 sqhd:0776 p:1 m:0 dnr:0 00:30:44.649 [2024-07-24 14:28:58.139878] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:44.649 [2024-07-24 14:28:58.139901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:44.649 [2024-07-24 14:28:58.139914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124688 len:8 PRP1 0x0 PRP2 0x0 00:30:44.649 [2024-07-24 14:28:58.139928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:44.649 [2024-07-24 14:28:58.139979] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 00:30:44.649 [2024-07-24 14:28:58.139998] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:30:44.649 [2024-07-24 14:28:58.140013] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:44.649 [2024-07-24 14:28:58.143214] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:44.649 [2024-07-24 14:28:58.161986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:30:44.649 [2024-07-24 14:28:58.208837] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:44.649 [2024-07-24 14:29:01.731323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x180900
00:30:44.649 [2024-07-24 14:29:01.731391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52967 cdw0:dbe56000 sqhd:c074 p:1 m:0 dnr:0
00:30:44.649 [2024-07-24 14:29:01.731730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:44.649 [2024-07-24 14:29:01.731743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:52967 cdw0:dbe56000 sqhd:c074 p:1 m:0 dnr:0
[... the same ABORTED - SQ DELETION (00/08) completion repeats for every remaining queued READ (lba:19384-19912, SGL KEYED DATA BLOCK) and WRITE (lba:19920-20392, SGL DATA BLOCK OFFSET) on qid:1 cid:52967 ...]
00:30:44.653 [2024-07-24 14:29:01.737146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:44.653 [2024-07-24 14:29:01.737189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:44.653 [2024-07-24 14:29:01.737204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20400 len:8 PRP1 0x0 PRP2 0x0
00:30:44.653 [2024-07-24 14:29:01.737218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:44.653 [2024-07-24 14:29:01.737267] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller.
00:30:44.653 [2024-07-24 14:29:01.737285] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422
00:30:44.653 [2024-07-24 14:29:01.737300] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:44.653 [2024-07-24 14:29:01.740508] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:44.653 [2024-07-24 14:29:01.759020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:30:44.653 [2024-07-24 14:29:01.802067] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:30:44.653 [2024-07-24 14:29:06.273436 .. 14:29:06.277380] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [condensed: repeated per-command NOTICE pairs for queued I/O aborted during SQ deletion - READ lba 29944-30504 and WRITE lba 30512-30952 (len:8 each) on sqid:1, every completion ABORTED - SQ DELETION (00/08) qid:1 cid:52970 cdw0:dbe56000 sqhd:db56 p:1 m:0 dnr:0]
00:30:44.656 [2024-07-24 14:29:06.277540] rdma_verbs.c: 83:spdk_rdma_qp_destroy: *WARNING*: Destroying qpair with queued Work Requests
00:30:44.656 [2024-07-24 14:29:06.279311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:44.656 [2024-07-24 14:29:06.279340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:44.656 [2024-07-24 14:29:06.279370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30960 len:8 PRP1 0x0 PRP2 0x0
00:30:44.656 [2024-07-24 14:29:06.279384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:44.656 [2024-07-24 14:29:06.279435] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller.
00:30:44.656 [2024-07-24 14:29:06.279453] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420
00:30:44.656 [2024-07-24 14:29:06.279468] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:44.656 [2024-07-24 14:29:06.282651] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:44.656 [2024-07-24 14:29:06.301496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:30:44.656 [2024-07-24 14:29:06.345954] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:30:44.656
00:30:44.656 Latency(us)
00:30:44.656 Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:30:44.656 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:44.656 Verification LBA range: start 0x0 length 0x4000
00:30:44.656 NVMe0n1            :      15.01   11080.90      43.28    231.16     0.00   11286.30     609.85 1019060.53
00:30:44.656 ===================================================================================================================
00:30:44.656 Total              :            11080.90      43.28    231.16     0.00   11286.30     609.85 1019060.53
00:30:44.656 Received shutdown signal, test time was about 15.000000 seconds
00:30:44.656
00:30:44.656 Latency(us)
00:30:44.656 Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:30:44.656 ===================================================================================================================
00:30:44.656 Total              :                0.00       0.00      0.00     0.00       0.00       0.00       0.00
00:30:44.656 14:29:11 nvmf_rdma.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:30:44.656 14:29:11 nvmf_rdma.nvmf_failover -- host/failover.sh@65 -- # count=3
00:30:44.656 14:29:11 nvmf_rdma.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:30:44.656 14:29:11 nvmf_rdma.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=225515
00:30:44.656 14:29:11 nvmf_rdma.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:30:44.657 14:29:11 nvmf_rdma.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 225515 /var/tmp/bdevperf.sock
00:30:44.657 14:29:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 225515 ']'
00:30:44.657 14:29:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:30:44.657 14:29:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100
00:30:44.657 14:29:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:30:44.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
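The grep/count pair above is the first phase's pass criterion: the captured bdevperf log must contain exactly one 'Resetting controller successful' line per failover hop, three in total. A minimal bash sketch of the same check, assuming the log was captured to a local try.txt as this run does (the file name and expected count are taken from the trace, not from the script source):

    # count completed controller resets in the captured log and fail otherwise
    count=$(grep -c 'Resetting controller successful' try.txt)
    if (( count != 3 )); then
        echo "expected 3 successful resets, got $count" >&2
        exit 1
    fi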
00:30:44.657 14:29:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable
00:30:44.657 14:29:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:30:44.657 14:29:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:30:44.657 14:29:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@860 -- # return 0
00:30:44.657 14:29:11 nvmf_rdma.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
00:30:44.657 [2024-07-24 14:29:11.823541] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 ***
00:30:44.657 14:29:11 nvmf_rdma.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
00:30:44.915 [2024-07-24 14:29:12.064319] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 ***
00:30:44.915 14:29:12 nvmf_rdma.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:45.173 NVMe0n1
00:30:45.173 14:29:12 nvmf_rdma.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:45.431
00:30:45.431 14:29:12 nvmf_rdma.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:45.707
00:30:45.707 14:29:13 nvmf_rdma.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:45.707 14:29:13 nvmf_rdma.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:30:45.995 14:29:13 nvmf_rdma.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:46.251 14:29:13 nvmf_rdma.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:30:49.534 14:29:16 nvmf_rdma.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:49.534 14:29:16 nvmf_rdma.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:30:49.534 14:29:16 nvmf_rdma.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=226168
00:30:49.534 14:29:16 nvmf_rdma.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:30:49.534 14:29:16 nvmf_rdma.nvmf_failover -- host/failover.sh@92 -- # wait 226168
00:30:50.907 0
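The trace above wires up the second phase over the bdevperf RPC socket: two extra RDMA listeners are added on the target, NVMe0 is attached once per portal so the bdev has three paths to fail over between, and perform_tests starts the timed run. A condensed sketch of the listener/attach sequence, assuming rpc.py from the SPDK scripts directory is on PATH and using the NQN, address, and ports from this run (the loop is an editorial condensation of the three attach calls in the trace):

    # add alternate target listeners, then attach one controller path per portal
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
    for port in 4420 4421 4422; do
        rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t rdma -a 192.168.100.8 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done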
00:30:50.907 [2024-07-24 14:29:11.355850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid225515 ]
00:30:50.907 EAL: No free 2048 kB hugepages reported on node 1
00:30:50.907 [2024-07-24 14:29:11.425174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:50.907 [2024-07-24 14:29:11.506805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:30:50.907 [2024-07-24 14:29:13.508020] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:30:50.907 [2024-07-24 14:29:13.508720] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:50.907 [2024-07-24 14:29:13.508784] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:50.907 [2024-07-24 14:29:13.534655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:30:50.907 [2024-07-24 14:29:13.550409] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:30:50.907 Running I/O for 1 seconds...
00:30:50.907
00:30:50.907                                    Latency(us)
00:30:50.907 Device Information : runtime(s)    IOPS      MiB/s    Fail/s    TO/s     Average     min        max
00:30:50.907 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:50.907   Verification LBA range: start 0x0 length 0x4000
00:30:50.907   NVMe0n1            : 1.01        14082.79  55.01    0.00      0.00     9035.54     3349.62    12087.75
00:30:50.907 ===================================================================================================================
00:30:50.907 Total              :               14082.79  55.01    0.00      0.00     9035.54     3349.62    12087.75
00:30:50.907 14:29:17 nvmf_rdma.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:50.907 14:29:17 nvmf_rdma.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:30:51.165 14:29:18 nvmf_rdma.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:51.165 14:29:18 nvmf_rdma.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:51.165 14:29:18 nvmf_rdma.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:30:51.422 14:29:18 nvmf_rdma.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:51.681 14:29:19 nvmf_rdma.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:30:54.967 14:29:22 nvmf_rdma.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:54.967 14:29:22 nvmf_rdma.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:30:54.967 14:29:22 nvmf_rdma.nvmf_failover -- host/failover.sh@108 -- # killprocess 225515
00:30:54.967 14:29:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 225515 ']'
00:30:54.967 14:29:22
nvmf_rdma.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 225515 00:30:54.967 14:29:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:30:54.967 14:29:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:54.967 14:29:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 225515 00:30:54.967 14:29:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:54.967 14:29:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:54.967 14:29:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 225515' 00:30:54.967 killing process with pid 225515 00:30:54.967 14:29:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@965 -- # kill 225515 00:30:54.967 14:29:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@970 -- # wait 225515 00:30:55.225 14:29:22 nvmf_rdma.nvmf_failover -- host/failover.sh@110 -- # sync 00:30:55.225 14:29:22 nvmf_rdma.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:55.483 14:29:22 nvmf_rdma.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:30:55.483 14:29:22 nvmf_rdma.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:55.483 14:29:22 nvmf_rdma.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:30:55.483 14:29:22 nvmf_rdma.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:55.483 14:29:22 nvmf_rdma.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:30:55.483 14:29:22 nvmf_rdma.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:30:55.483 14:29:22 nvmf_rdma.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:30:55.483 14:29:22 nvmf_rdma.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:30:55.483 14:29:22 nvmf_rdma.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:55.483 14:29:22 nvmf_rdma.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:30:55.483 rmmod nvme_rdma 00:30:55.483 rmmod nvme_fabrics 00:30:55.483 14:29:22 nvmf_rdma.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:55.483 14:29:22 nvmf_rdma.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:30:55.483 14:29:22 nvmf_rdma.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:30:55.483 14:29:22 nvmf_rdma.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 223251 ']' 00:30:55.483 14:29:22 nvmf_rdma.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 223251 00:30:55.483 14:29:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 223251 ']' 00:30:55.483 14:29:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 223251 00:30:55.484 14:29:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:30:55.484 14:29:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:55.484 14:29:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 223251 00:30:55.742 14:29:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:30:55.742 14:29:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:30:55.742 14:29:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 223251' 
00:30:55.742 killing process with pid 223251 00:30:55.742 14:29:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@965 -- # kill 223251 00:30:55.742 14:29:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@970 -- # wait 223251 00:30:56.000 14:29:23 nvmf_rdma.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:56.000 14:29:23 nvmf_rdma.nvmf_failover -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:30:56.000 00:30:56.000 real 0m33.268s 00:30:56.000 user 2m5.079s 00:30:56.000 sys 0m4.098s 00:30:56.000 14:29:23 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:56.000 14:29:23 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:56.000 ************************************ 00:30:56.000 END TEST nvmf_failover 00:30:56.000 ************************************ 00:30:56.000 14:29:23 nvmf_rdma -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:30:56.000 14:29:23 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:56.000 14:29:23 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:56.000 14:29:23 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:30:56.000 ************************************ 00:30:56.000 START TEST nvmf_host_discovery 00:30:56.000 ************************************ 00:30:56.000 14:29:23 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:30:56.000 * Looking for test storage... 00:30:56.000 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:30:56.000 14:29:23 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:56.000 14:29:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:30:56.000 14:29:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:56.000 14:29:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:56.000 14:29:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:56.000 14:29:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:56.000 14:29:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:56.000 14:29:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:56.000 14:29:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:56.000 14:29:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:56.000 14:29:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:56.000 14:29:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:56.000 14:29:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:30:56.000 14:29:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:30:56.000 14:29:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:56.000 14:29:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:56.000 14:29:23 
nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:56.000 14:29:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:56.000 14:29:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:56.000 14:29:23 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:56.000 14:29:23 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:56.000 14:29:23 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:56.001 14:29:23 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.001 14:29:23 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.001 14:29:23 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.001 14:29:23 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:30:56.001 14:29:23 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.001 14:29:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:30:56.001 14:29:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:56.001 14:29:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:56.001 14:29:23 nvmf_rdma.nvmf_host_discovery -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:30:56.001 14:29:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:30:56.001 14:29:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:30:56.001 14:29:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:30:56.001 14:29:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:30:56.001 14:29:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0
00:30:56.001 14:29:23 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']'
00:30:56.001 14:29:23 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
00:30:56.001 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.
00:30:56.001 14:29:23 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0
00:30:56.001
00:30:56.001 real    0m0.063s
00:30:56.001 user    0m0.020s
00:30:56.001 sys     0m0.048s
00:30:56.001 14:29:23 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable
00:30:56.001 14:29:23 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:30:56.001 ************************************
00:30:56.001 END TEST nvmf_host_discovery
00:30:56.001 ************************************
00:30:56.001 14:29:23 nvmf_rdma -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma
00:30:56.001 14:29:23 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:30:56.001 14:29:23 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable
00:30:56.001 14:29:23 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:30:56.001 ************************************
00:30:56.001 START TEST nvmf_host_multipath_status
00:30:56.001 ************************************
00:30:56.001 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma
00:30:56.258 * Looking for test storage...
00:30:56.258 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:30:56.258 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:56.258 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:30:56.258 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:56.258 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:56.258 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:56.258 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:56.258 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:56.258 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:56.258 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:56.258 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:56.258 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:56.258 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:56.258 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:30:56.258 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:30:56.258 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:56.258 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:56.258 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:56.258 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:56.258 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:56.258 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:56.258 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:56.258 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:56.258 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.259 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.259 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.259 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:30:56.259 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.259 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:30:56.259 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:56.259 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:56.259 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:56.259 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:56.259 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:56.259 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:56.259 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:56.259 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:56.259 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:56.259 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:56.259 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:30:56.259 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:30:56.259 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:56.259 14:29:23 
nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:56.259 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:30:56.259 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:30:56.259 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:56.259 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:56.259 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:56.259 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:56.259 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:56.259 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:56.259 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:56.259 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:56.259 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:56.259 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:30:56.259 14:29:23 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:30:58.796 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:30:58.796 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:30:58.796 
14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:30:58.796 Found net devices under 0000:81:00.0: mlx_0_0 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:30:58.796 Found net devices under 0000:81:00.1: mlx_0_1 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # rdma_device_init 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # uname 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # modprobe ib_cm 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@63 -- # modprobe ib_core 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@64 -- # modprobe ib_umad 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe iw_cm 00:30:58.796 14:29:25 
nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe rdma_cm
00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe rdma_ucm
00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # allocate_nic_ips
00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # get_rdma_if_list
00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs
00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs
00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net
00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 ))
00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0
00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2
00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1
00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2
00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list)
00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0
00:30:58.796 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0
00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0
00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}'
00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1
00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.8
00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]]
00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip addr show mlx_0_0
00:30:58.797 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:30:58.797     link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff
00:30:58.797     altname enp129s0f0np0
00:30:58.797     inet 192.168.100.8/24 scope global mlx_0_0
00:30:58.797        valid_lft forever preferred_lft forever
00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list)
00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1
00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1
00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1
00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}'
00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1
00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.9
00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]]
00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip addr show mlx_0_1
00:30:58.797 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:30:58.797     link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff
00:30:58.797     altname enp129s0f1np1
00:30:58.797     inet 192.168.100.9/24 scope global mlx_0_1
00:30:58.797        valid_lft forever preferred_lft forever
00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0
00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]]
00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # get_available_rdma_ips
00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # get_rdma_if_list
00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs
00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs
00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net
00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 ))
00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0
00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2
00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 ==
\m\l\x\_\0\_\0 ]] 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:30:58.797 192.168.100.9' 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:30:58.797 192.168.100.9' 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # head -n 1 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:30:58.797 192.168.100.9' 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # tail -n +2 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # head -n 1 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status 
-- common/autotest_common.sh@720 -- # xtrace_disable 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=228914 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 228914 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 228914 ']' 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:58.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:58.797 14:29:25 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:58.797 [2024-07-24 14:29:25.835889] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:30:58.797 [2024-07-24 14:29:25.835970] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:58.797 EAL: No free 2048 kB hugepages reported on node 1 00:30:58.797 [2024-07-24 14:29:25.906176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:58.797 [2024-07-24 14:29:25.995195] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:58.797 [2024-07-24 14:29:25.995261] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:58.797 [2024-07-24 14:29:25.995290] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:58.797 [2024-07-24 14:29:25.995302] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:58.797 [2024-07-24 14:29:25.995312] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:58.797 [2024-07-24 14:29:25.998817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:58.797 [2024-07-24 14:29:25.998829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:58.797 14:29:26 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:58.797 14:29:26 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:30:58.797 14:29:26 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:58.797 14:29:26 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:58.797 14:29:26 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:58.797 14:29:26 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:58.797 14:29:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=228914 00:30:58.797 14:29:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:30:59.365 [2024-07-24 14:29:26.439214] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xace400/0xad28b0) succeed. 00:30:59.365 [2024-07-24 14:29:26.451124] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xacf8b0/0xb13f40) succeed. 00:30:59.365 14:29:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:59.623 Malloc0 00:30:59.623 14:29:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:30:59.881 14:29:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:00.139 14:29:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:31:00.397 [2024-07-24 14:29:27.612703] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:31:00.397 14:29:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:31:00.655 [2024-07-24 14:29:27.849213] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:31:00.655 14:29:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=229120 00:31:00.655 14:29:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:00.655 14:29:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:00.655 14:29:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # 
waitforlisten 229120 /var/tmp/bdevperf.sock 00:31:00.655 14:29:27 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 229120 ']' 00:31:00.655 14:29:27 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:00.655 14:29:27 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:00.655 14:29:27 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:00.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:00.655 14:29:27 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:00.655 14:29:27 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:00.914 14:29:28 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:00.914 14:29:28 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:31:00.914 14:29:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:01.172 14:29:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:31:01.429 Nvme0n1 00:31:01.429 14:29:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:01.687 Nvme0n1 00:31:01.687 14:29:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:31:01.687 14:29:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:31:04.254 14:29:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:31:04.254 14:29:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:31:04.254 14:29:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:31:04.254 14:29:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:31:05.632 14:29:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:31:05.632 14:29:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:05.632 14:29:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:31:05.632 14:29:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:05.632 14:29:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:05.632 14:29:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:05.632 14:29:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:05.632 14:29:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:05.890 14:29:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:05.890 14:29:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:05.890 14:29:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:05.890 14:29:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:06.148 14:29:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:06.148 14:29:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:06.148 14:29:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:06.148 14:29:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:06.407 14:29:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:06.407 14:29:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:06.407 14:29:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:06.407 14:29:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:06.665 14:29:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:06.665 14:29:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:06.665 14:29:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:06.665 14:29:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:06.923 14:29:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:06.923 14:29:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # 
set_ANA_state non_optimized optimized 00:31:06.923 14:29:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:31:07.181 14:29:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:31:07.439 14:29:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:31:08.377 14:29:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:31:08.377 14:29:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:08.377 14:29:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:08.377 14:29:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:08.635 14:29:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:08.635 14:29:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:08.635 14:29:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:08.635 14:29:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:08.893 14:29:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:08.893 14:29:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:08.893 14:29:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:08.893 14:29:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:09.152 14:29:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:09.152 14:29:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:09.152 14:29:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:09.152 14:29:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:09.410 14:29:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:09.410 14:29:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:09.410 14:29:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:09.410 14:29:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:09.668 14:29:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:09.668 14:29:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:09.668 14:29:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:09.668 14:29:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:09.926 14:29:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:09.926 14:29:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:09.926 14:29:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:31:10.184 14:29:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:31:10.442 14:29:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:31:11.376 14:29:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:31:11.376 14:29:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:11.376 14:29:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:11.376 14:29:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:11.634 14:29:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:11.634 14:29:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:11.634 14:29:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:11.634 14:29:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:11.892 14:29:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:11.892 14:29:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:11.892 14:29:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:11.892 
14:29:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:12.150 14:29:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:12.150 14:29:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:12.150 14:29:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:12.150 14:29:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:12.408 14:29:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:12.408 14:29:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:12.408 14:29:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:12.408 14:29:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:12.667 14:29:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:12.667 14:29:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:12.667 14:29:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:12.667 14:29:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:12.925 14:29:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:12.925 14:29:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:31:12.925 14:29:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:31:13.184 14:29:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:31:13.444 14:29:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:31:14.382 14:29:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:31:14.382 14:29:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:14.382 14:29:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:14.382 14:29:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").current' 00:31:14.640 14:29:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:14.640 14:29:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:14.640 14:29:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:14.640 14:29:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:14.898 14:29:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:14.898 14:29:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:14.898 14:29:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:14.898 14:29:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:15.156 14:29:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:15.156 14:29:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:15.156 14:29:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:15.156 14:29:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:15.414 14:29:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:15.414 14:29:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:15.414 14:29:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:15.414 14:29:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:15.672 14:29:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:15.672 14:29:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:15.672 14:29:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:15.672 14:29:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:15.930 14:29:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:15.930 14:29:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:31:15.930 14:29:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:31:16.189 14:29:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:31:16.447 14:29:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:31:17.380 14:29:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:31:17.380 14:29:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:17.380 14:29:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:17.380 14:29:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:17.667 14:29:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:17.667 14:29:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:17.667 14:29:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:17.667 14:29:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:17.925 14:29:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:17.925 14:29:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:17.925 14:29:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:17.925 14:29:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:18.183 14:29:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:18.183 14:29:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:18.183 14:29:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:18.183 14:29:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:18.441 14:29:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:18.441 14:29:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:18.441 14:29:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:18.441 
14:29:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:18.699 14:29:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:18.699 14:29:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:18.699 14:29:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:18.699 14:29:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:18.957 14:29:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:18.957 14:29:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:31:18.957 14:29:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:31:19.215 14:29:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:31:19.474 14:29:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:31:20.409 14:29:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:31:20.409 14:29:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:20.409 14:29:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:20.409 14:29:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:20.667 14:29:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:20.667 14:29:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:20.667 14:29:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:20.667 14:29:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:20.925 14:29:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:20.925 14:29:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:20.925 14:29:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:20.925 14:29:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:31:21.183 14:29:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:21.183 14:29:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:21.183 14:29:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:21.183 14:29:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:21.442 14:29:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:21.442 14:29:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:21.442 14:29:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:21.442 14:29:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:21.700 14:29:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:21.700 14:29:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:21.700 14:29:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:21.700 14:29:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:21.958 14:29:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:21.958 14:29:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:31:22.219 14:29:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:31:22.219 14:29:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:31:22.478 14:29:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:31:22.736 14:29:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:31:23.671 14:29:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:31:23.671 14:29:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:23.671 14:29:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:23.671 14:29:50 
nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:23.929 14:29:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:23.929 14:29:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:23.929 14:29:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:23.929 14:29:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:24.187 14:29:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:24.187 14:29:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:24.187 14:29:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:24.187 14:29:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:24.445 14:29:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:24.445 14:29:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:24.445 14:29:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:24.445 14:29:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:24.704 14:29:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:24.704 14:29:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:24.704 14:29:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:24.704 14:29:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:24.962 14:29:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:24.962 14:29:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:24.962 14:29:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:24.962 14:29:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:25.220 14:29:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:25.220 14:29:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:31:25.220 
14:29:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:31:25.479 14:29:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:31:25.737 14:29:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:31:26.674 14:29:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:31:26.674 14:29:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:26.674 14:29:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:26.674 14:29:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:26.932 14:29:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:26.932 14:29:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:26.932 14:29:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:26.932 14:29:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:27.189 14:29:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:27.189 14:29:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:27.189 14:29:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:27.189 14:29:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:27.447 14:29:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:27.447 14:29:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:27.447 14:29:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:27.447 14:29:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:27.705 14:29:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:27.705 14:29:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:27.705 14:29:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:27.705 14:29:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:27.963 14:29:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:27.963 14:29:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:27.963 14:29:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:27.963 14:29:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:28.224 14:29:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.224 14:29:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:31:28.224 14:29:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:31:28.484 14:29:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:31:28.484 14:29:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:31:29.863 14:29:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:31:29.863 14:29:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:29.863 14:29:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:29.863 14:29:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:29.863 14:29:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:29.863 14:29:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:29.863 14:29:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:29.863 14:29:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:30.121 14:29:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:30.121 14:29:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:30.121 14:29:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.121 
14:29:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:30.379 14:29:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:30.379 14:29:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:30.379 14:29:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.379 14:29:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:30.636 14:29:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:30.636 14:29:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:30.636 14:29:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.636 14:29:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:30.921 14:29:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:30.921 14:29:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:30.921 14:29:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.922 14:29:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:31.186 14:29:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.186 14:29:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:31:31.186 14:29:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:31:31.443 14:29:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:31:31.702 14:29:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:31:32.640 14:29:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:31:32.641 14:29:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:32.641 14:29:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.641 14:29:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").current' 00:31:32.899 14:30:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:32.899 14:30:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:32.899 14:30:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.899 14:30:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:33.157 14:30:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:33.157 14:30:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:33.157 14:30:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.157 14:30:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:33.416 14:30:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:33.416 14:30:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:33.416 14:30:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.416 14:30:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:33.675 14:30:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:33.675 14:30:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:33.675 14:30:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.675 14:30:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:33.932 14:30:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:33.932 14:30:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:33.932 14:30:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.932 14:30:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:34.190 14:30:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:34.190 14:30:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 229120 00:31:34.190 14:30:01 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 229120 ']' 00:31:34.190 
14:30:01 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 229120 00:31:34.190 14:30:01 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:31:34.190 14:30:01 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:34.190 14:30:01 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 229120 00:31:34.190 14:30:01 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:31:34.190 14:30:01 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:31:34.190 14:30:01 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 229120' 00:31:34.190 killing process with pid 229120 00:31:34.190 14:30:01 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 229120 00:31:34.190 14:30:01 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 229120 00:31:34.454 Connection closed with partial response: 00:31:34.454 00:31:34.454 00:31:34.454 14:30:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 229120 00:31:34.455 14:30:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:34.455 [2024-07-24 14:29:27.902327] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:31:34.455 [2024-07-24 14:29:27.902403] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid229120 ] 00:31:34.455 EAL: No free 2048 kB hugepages reported on node 1 00:31:34.455 [2024-07-24 14:29:27.970119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:34.455 [2024-07-24 14:29:28.059749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:34.455 Running I/O for 90 seconds... 
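
(Editor's sketch, not part of the captured trace: the choreography logged above reduces to two helpers from test/nvmf/host/multipath_status.sh. The socket, NQN, address, and ports below are exactly the ones recorded in this log; the only added assumption is that the SPDK scripts/rpc.py tool is invocable on PATH as rpc.py.)

set_ANA_state() {    # multipath_status.sh@59-60: flip both listeners' ANA state on the target
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420 -n "$1"
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4421 -n "$2"
}

port_status() {      # multipath_status.sh@64: assert one attribute of one path on the initiator
    local port=$1 attr=$2 expected=$3 got
    got=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
          jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
    [[ $got == "$expected" ]]
}

set_ANA_state non_optimized optimized   # e.g. the @94 transition above
sleep 1                                 # give the initiator time to observe the ANA change
port_status 4420 current false          # 4420 should stop carrying I/O ...
port_status 4421 current true           # ... and 4421 should become the active path

(Once bdev_nvme_set_multipath_policy switches Nvme0n1 to active_active, both paths report current=true, as the @121 check above shows. What follows is the bdevperf-side qpair trace dumped from try.txt: each WRITE/READ completion printed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) appears to be an I/O that completed on a path whose listener was in an inaccessible ANA state at that moment.)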
00:31:34.455 [2024-07-24 14:29:43.368268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:110664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.455 [2024-07-24 14:29:43.368329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:34.455 [2024-07-24 14:29:43.368398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:110672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.455 [2024-07-24 14:29:43.368417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:34.455 [2024-07-24 14:29:43.368436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:110680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.455 [2024-07-24 14:29:43.368450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:34.455 [2024-07-24 14:29:43.368467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:110688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.455 [2024-07-24 14:29:43.368482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:34.455 [2024-07-24 14:29:43.368498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:110696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.455 [2024-07-24 14:29:43.368512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:34.455 [2024-07-24 14:29:43.368528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:110704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.455 [2024-07-24 14:29:43.368543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:34.455 [2024-07-24 14:29:43.368559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:110712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.455 [2024-07-24 14:29:43.368573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:34.455 [2024-07-24 14:29:43.368590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:110720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.455 [2024-07-24 14:29:43.368603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:34.455 [2024-07-24 14:29:43.368620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:110728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.455 [2024-07-24 14:29:43.368634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:34.455 [2024-07-24 14:29:43.368650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:110736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.455 [2024-07-24 14:29:43.368664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:34.455 [2024-07-24 14:29:43.368680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:110744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.455 [2024-07-24 14:29:43.368706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:34.455 [2024-07-24 14:29:43.368724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:110752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.455 [2024-07-24 14:29:43.368738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:34.455 [2024-07-24 14:29:43.368754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:110760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.455 [2024-07-24 14:29:43.368768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:34.455 [2024-07-24 14:29:43.368784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:110768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.455 [2024-07-24 14:29:43.368808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:34.455 [2024-07-24 14:29:43.368826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:110776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.455 [2024-07-24 14:29:43.368840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:34.455 [2024-07-24 14:29:43.368856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:110784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.455 [2024-07-24 14:29:43.368870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:34.455 [2024-07-24 14:29:43.368886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:110792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.455 [2024-07-24 14:29:43.368900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:34.455 [2024-07-24 14:29:43.368917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:110096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x183400 00:31:34.455 [2024-07-24 14:29:43.368931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:34.455 [2024-07-24 14:29:43.368948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:110104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x183400 00:31:34.455 [2024-07-24 14:29:43.368962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:34.455 [2024-07-24 14:29:43.368978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:110800 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:34.455 [2024-07-24 14:29:43.368993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:34.455 [2024-07-24 14:29:43.369010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:110808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.455 [2024-07-24 14:29:43.369024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:34.455 [2024-07-24 14:29:43.369040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:110816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.455 [2024-07-24 14:29:43.369054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:34.455 [2024-07-24 14:29:43.369070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:110824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.455 [2024-07-24 14:29:43.369088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:34.455 [2024-07-24 14:29:43.369105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:110832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.455 [2024-07-24 14:29:43.369120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:34.455 [2024-07-24 14:29:43.369136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:110840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.455 [2024-07-24 14:29:43.369150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:34.455 [2024-07-24 14:29:43.369167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:110848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.455 [2024-07-24 14:29:43.369180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:34.455 [2024-07-24 14:29:43.369196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:110856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.455 [2024-07-24 14:29:43.369210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:34.455 [2024-07-24 14:29:43.369227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.455 [2024-07-24 14:29:43.369240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:34.455 [2024-07-24 14:29:43.369256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:110872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.455 [2024-07-24 14:29:43.369270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:34.455 [2024-07-24 14:29:43.369286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:126 nsid:1 lba:110112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x183400
00:31:34.455 [2024-07-24 14:29:43.369300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:31:34.455 [2024-07-24 14:29:43.369317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:110120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x183400
00:31:34.455 [2024-07-24 14:29:43.369330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007e p:0 m:0 dnr:0
[... 14:29:43.369346 through 14:29:43.373461: the same command/completion pair repeats for every I/O still outstanding on qid:1, READ lba:110128-110656 (SGL KEYED DATA BLOCK, key:0x183400) and WRITE lba:110880-111112 (SGL DATA BLOCK OFFSET 0x0), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd advancing 007f through 005f, p:0 m:0 dnr:0 throughout ...]
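The paired NOTICE lines above come from SPDK's nvme_io_qpair_print_command and spdk_nvme_print_completion: when the active path goes ANA-inaccessible during the failover window, every command still queued on the qpair is completed with a path-related error, and each command is printed together with its completion. A burst like this is easier to audit in aggregate; the minimal Python sketch below (a hypothetical helper, assuming this console output has been saved to nvmf_failover.log) tallies command opcodes and completion statuses and reports the LBA span:

import re
from collections import Counter

# Command lines printed by nvme_io_qpair_print_command, e.g.
# "... *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:110120 len:8 ..."
COMMAND = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (?P<op>READ|WRITE) "
    r"sqid:(?P<sqid>\d+) cid:(?P<cid>\d+) nsid:(?P<nsid>\d+) "
    r"lba:(?P<lba>\d+) len:(?P<len>\d+)"
)
# Completion lines printed by spdk_nvme_print_completion, e.g.
# "... *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 ..."
COMPLETION = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: (?P<status>.+?) "
    r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\) qid:(?P<qid>\d+) cid:(?P<cid>\d+)"
)

def summarize(path):
    ops, statuses, lbas = Counter(), Counter(), []
    with open(path) as log:
        for line in log:
            # finditer, not match: several records can share one wrapped line
            for m in COMMAND.finditer(line):
                ops[m["op"]] += 1
                lbas.append(int(m["lba"]))
            for m in COMPLETION.finditer(line):
                statuses[f'{m["status"]} ({m["sct"]}/{m["sc"]})'] += 1
    print("commands:   ", dict(ops))
    print("completions:", dict(statuses))
    if lbas:
        print(f"lba span:    {min(lbas)}-{max(lbas)}")

summarize("nvmf_failover.log")

Over a window like this one it should report READ and WRITE counts against a single completion status, ASYMMETRIC ACCESS INACCESSIBLE (03/02), which is what a clean path-down drain looks like. About fifteen seconds later the same pattern recurs on the same qpair: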
00:31:34.458 [2024-07-24 14:29:58.842526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:33576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x183400
00:31:34.458 [2024-07-24 14:29:58.842590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006b p:0 m:0 dnr:0
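The (03/02) every completion carries is the (SCT/SC) pair from the NVMe status field: status code type 0x3 is Path Related Status, and within that type status code 0x02 is Asymmetric Access Inaccessible, i.e. the ANA state of the namespace on this path. Note dnr:0 on each completion: Do Not Retry is clear, so the host is permitted to reissue the I/O on another path. A small sketch of the bit layout, using an illustrative raw status word (the log prints only the decoded form):

# Unpack a 16-bit NVMe completion status word into the fields that
# spdk_nvme_print_completion displays. Layout (as SPDK packs it in
# struct spdk_nvme_status, include/spdk/nvme_spec.h): bit 0 phase tag,
# bits 1-8 status code (SC), bits 9-11 status code type (SCT),
# bits 12-13 command retry delay, bit 14 more (m), bit 15 do not retry (dnr).
def decode_status(status: int):
    p = status & 0x1
    sc = (status >> 1) & 0xff
    sct = (status >> 9) & 0x7
    m = (status >> 14) & 0x1
    dnr = (status >> 15) & 0x1
    return sct, sc, p, m, dnr

# 0x0604 encodes SCT 0x3 / SC 0x02 (illustrative value)
sct, sc, p, m, dnr = decode_status(0x0604)
print(f"({sct:02x}/{sc:02x}) p:{p} m:{m} dnr:{dnr}")  # -> (03/02) p:0 m:0 dnr:0

The remainder of the queue drains the same way: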
[... 14:29:58.842636 through 14:29:58.848197: the remaining outstanding I/O on qid:1, READ lba:33592-34280 (SGL KEYED DATA BLOCK, key:0x183400) and WRITE lba:34080-34696 (SGL DATA BLOCK OFFSET 0x0), every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd advancing 006c through 0039, p:0 m:0 dnr:0 throughout; a few LBAs reappear under new cids, consistent with retries against the still-inaccessible path ...]
00:31:34.460 [2024-07-24 14:29:58.848214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34280 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x20000758e000 len:0x1000 key:0x183400 00:31:34.460 [2024-07-24 14:29:58.848229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:34.460 [2024-07-24 14:29:58.848247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x183400 00:31:34.460 [2024-07-24 14:29:58.848266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:34.460 [2024-07-24 14:29:58.848285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:34712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.460 [2024-07-24 14:29:58.848301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:34.460 [2024-07-24 14:29:58.848318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:34728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.460 [2024-07-24 14:29:58.848333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:34.460 [2024-07-24 14:29:58.848350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:34352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x183400 00:31:34.460 [2024-07-24 14:29:58.848366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:34.460 [2024-07-24 14:29:58.848383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:34376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x183400 00:31:34.460 [2024-07-24 14:29:58.848398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:34.461 [2024-07-24 14:29:58.848415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:34384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x183400 00:31:34.461 [2024-07-24 14:29:58.848430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:34.461 [2024-07-24 14:29:58.848447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:34744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.461 [2024-07-24 14:29:58.848462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:34.461 [2024-07-24 14:29:58.848479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:34752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.461 [2024-07-24 14:29:58.848494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:34.461 [2024-07-24 14:29:58.848512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:34768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.461 [2024-07-24 14:29:58.848527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 
cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:34.461 [2024-07-24 14:29:58.848544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x183400 00:31:34.461 [2024-07-24 14:29:58.848559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:34.461 [2024-07-24 14:29:58.848577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.461 [2024-07-24 14:29:58.848592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:34.461 [2024-07-24 14:29:58.848609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:34792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.461 [2024-07-24 14:29:58.848624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:34.461 [2024-07-24 14:29:58.848641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:34800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.461 [2024-07-24 14:29:58.848660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:34.461 [2024-07-24 14:29:58.848679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:34512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x183400 00:31:34.461 [2024-07-24 14:29:58.848694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:34.461 [2024-07-24 14:29:58.848712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:34544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x183400 00:31:34.461 [2024-07-24 14:29:58.848727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:34.461 [2024-07-24 14:29:58.848745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.461 [2024-07-24 14:29:58.848760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:34.461 [2024-07-24 14:29:58.848777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:34568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x183400 00:31:34.461 [2024-07-24 14:29:58.848801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:34.461 [2024-07-24 14:29:58.848821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:34832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.461 [2024-07-24 14:29:58.848836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:34.461 [2024-07-24 14:29:58.848854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:34840 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:34.461 [2024-07-24 14:29:58.848868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:34.461 [2024-07-24 14:29:58.848885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:34088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x183400 00:31:34.461 [2024-07-24 14:29:58.848901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:34.461 [2024-07-24 14:29:58.849050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:33664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x183400 00:31:34.461 [2024-07-24 14:29:58.849073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:34.461 [2024-07-24 14:29:58.849095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:33696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x183400 00:31:34.461 [2024-07-24 14:29:58.849111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:34.461 [2024-07-24 14:29:58.849128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:33744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x183400 00:31:34.461 [2024-07-24 14:29:58.849143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:34.461 [2024-07-24 14:29:58.849161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:33784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x183400 00:31:34.461 [2024-07-24 14:29:58.849176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:34.461 [2024-07-24 14:29:58.849198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:34160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.461 [2024-07-24 14:29:58.849214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:34.461 [2024-07-24 14:29:58.849232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:34184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.461 [2024-07-24 14:29:58.849247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:34.461 [2024-07-24 14:29:58.849265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x183400 00:31:34.461 [2024-07-24 14:29:58.849281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:34.461 [2024-07-24 14:29:58.849298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:34224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.461 [2024-07-24 14:29:58.849314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:34.461 [2024-07-24 14:29:58.849331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:34256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.461 [2024-07-24 14:29:58.849347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:34.461 [2024-07-24 14:29:58.849365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:34272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.461 [2024-07-24 14:29:58.849380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:34.461 [2024-07-24 14:29:58.849397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:33768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x183400 00:31:34.461 [2024-07-24 14:29:58.849412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:34.461 [2024-07-24 14:29:58.849430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:33800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x183400 00:31:34.461 [2024-07-24 14:29:58.849444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:34.461 [2024-07-24 14:29:58.849462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:34328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.461 [2024-07-24 14:29:58.849476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:34.461 [2024-07-24 14:29:58.849494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:34336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.461 [2024-07-24 14:29:58.849509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:34.461 [2024-07-24 14:29:58.849526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:34360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.461 [2024-07-24 14:29:58.849541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:34.461 [2024-07-24 14:29:58.849559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:33880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x183400 00:31:34.461 [2024-07-24 14:29:58.849574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:34.461 [2024-07-24 14:29:58.849596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:34392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.461 [2024-07-24 14:29:58.849612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:34.461 [2024-07-24 14:29:58.849629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:33944 len:8 SGL KEYED DATA BLOCK 
ADDRESS 0x2000075fa000 len:0x1000 key:0x183400 00:31:34.461 [2024-07-24 14:29:58.849645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:34.461 [2024-07-24 14:29:58.849662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:33968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x183400 00:31:34.461 [2024-07-24 14:29:58.849678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:34.461 [2024-07-24 14:29:58.849695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:34424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.461 [2024-07-24 14:29:58.849710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:34.461 [2024-07-24 14:29:58.849729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:34040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x183400 00:31:34.461 [2024-07-24 14:29:58.849744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:34.461 [2024-07-24 14:29:58.849762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:34464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.461 [2024-07-24 14:29:58.849777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:34.462 [2024-07-24 14:29:58.849804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:34480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.462 [2024-07-24 14:29:58.849821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:34.462 [2024-07-24 14:29:58.849839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x183400 00:31:34.462 [2024-07-24 14:29:58.849854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:34.462 [2024-07-24 14:29:58.849873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.462 [2024-07-24 14:29:58.849888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:34.462 [2024-07-24 14:29:58.849906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:34552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.462 [2024-07-24 14:29:58.849921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:34.462 [2024-07-24 14:29:58.849938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:34560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.462 [2024-07-24 14:29:58.849953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0069 p:0 m:0 
dnr:0 00:31:34.462 [2024-07-24 14:29:58.849971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:34056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x183400 00:31:34.462 [2024-07-24 14:29:58.849987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:34.462 [2024-07-24 14:29:58.850008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:34592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.462 [2024-07-24 14:29:58.850024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:34.462 [2024-07-24 14:29:58.850042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.462 [2024-07-24 14:29:58.850058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:34.462 [2024-07-24 14:29:58.850075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:34096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x183400 00:31:34.462 [2024-07-24 14:29:58.850097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:34.462 [2024-07-24 14:29:58.852184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:34864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.462 [2024-07-24 14:29:58.852209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:34.462 [2024-07-24 14:29:58.852248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:34872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.462 [2024-07-24 14:29:58.852265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:34.462 [2024-07-24 14:29:58.852284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:34648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x183400 00:31:34.462 [2024-07-24 14:29:58.852300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:34.462 [2024-07-24 14:29:58.852560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.462 [2024-07-24 14:29:58.852584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:34.462 [2024-07-24 14:29:58.852618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:34896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.462 [2024-07-24 14:29:58.852638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:34.462 [2024-07-24 14:29:58.852657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:34912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.462 [2024-07-24 14:29:58.852674] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:34.462 [2024-07-24 14:29:58.852692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:34704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x183400 00:31:34.462 [2024-07-24 14:29:58.852707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:34.462 [2024-07-24 14:29:58.852724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:34928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.462 [2024-07-24 14:29:58.852740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:34.462 [2024-07-24 14:29:58.852757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.462 [2024-07-24 14:29:58.852772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:34.462 [2024-07-24 14:29:58.852804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x183400 00:31:34.462 [2024-07-24 14:29:58.852831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:34.462 [2024-07-24 14:29:58.852849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:34960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.462 [2024-07-24 14:29:58.852864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:34.462 [2024-07-24 14:29:58.852882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:34776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x183400 00:31:34.462 [2024-07-24 14:29:58.852897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:34.462 [2024-07-24 14:29:58.852915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:34976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.462 [2024-07-24 14:29:58.852930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:34.462 [2024-07-24 14:29:58.852947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:34992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.462 [2024-07-24 14:29:58.852962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:34.462 [2024-07-24 14:29:58.852979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:34808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x183400 00:31:34.462 [2024-07-24 14:29:58.852994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:34.462 [2024-07-24 14:29:58.853012] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:35008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.462 [2024-07-24 14:29:58.853026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:34.462 [2024-07-24 14:29:58.853044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:34848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x183400 00:31:34.462 [2024-07-24 14:29:58.853059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:34.462 [2024-07-24 14:29:58.853076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x183400 00:31:34.462 [2024-07-24 14:29:58.853091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:34.462 [2024-07-24 14:29:58.853108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:34136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x183400 00:31:34.462 [2024-07-24 14:29:58.853123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:34.462 [2024-07-24 14:29:58.853140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:35040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.462 [2024-07-24 14:29:58.853155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:34.462 [2024-07-24 14:29:58.853172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:35048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.462 [2024-07-24 14:29:58.853191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:34.462 [2024-07-24 14:29:58.853210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:34264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x183400 00:31:34.462 [2024-07-24 14:29:58.853225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:34.462 [2024-07-24 14:29:58.853243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x183400 00:31:34.462 [2024-07-24 14:29:58.853258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:34.462 [2024-07-24 14:29:58.853276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:35064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.462 [2024-07-24 14:29:58.853292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:34.462 [2024-07-24 14:29:58.853309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.462 [2024-07-24 14:29:58.853324] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:34.462 [2024-07-24 14:29:58.853341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:35096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.462 [2024-07-24 14:29:58.853356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:34.462 [2024-07-24 14:29:58.853374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x183400 00:31:34.462 [2024-07-24 14:29:58.853388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:34.462 [2024-07-24 14:29:58.853406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:34456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x183400 00:31:34.462 [2024-07-24 14:29:58.853421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:34.462 [2024-07-24 14:29:58.853438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:35120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.463 [2024-07-24 14:29:58.853453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:34.463 [2024-07-24 14:29:58.853471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:34536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x183400 00:31:34.463 [2024-07-24 14:29:58.853486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:34.463 [2024-07-24 14:29:58.853503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:35136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.463 [2024-07-24 14:29:58.853518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:34.463 [2024-07-24 14:29:58.853536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:35144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.463 [2024-07-24 14:29:58.853551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:34.463 [2024-07-24 14:29:58.853568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:34608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x183400 00:31:34.463 [2024-07-24 14:29:58.853589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:34.463 [2024-07-24 14:29:58.853741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:34640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.463 [2024-07-24 14:29:58.853763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:34.463 [2024-07-24 14:29:58.853782] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:34672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.463 [2024-07-24 14:29:58.853808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:34.463 [2024-07-24 14:29:58.853837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:34176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x183400 00:31:34.463 [2024-07-24 14:29:58.853853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:34.463 [2024-07-24 14:29:58.853870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:34216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x183400 00:31:34.463 [2024-07-24 14:29:58.853885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:34.463 [2024-07-24 14:29:58.853903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:34696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.463 [2024-07-24 14:29:58.853918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:34.463 [2024-07-24 14:29:58.853936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:34304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x183400 00:31:34.463 [2024-07-24 14:29:58.853951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:34.463 [2024-07-24 14:29:58.853968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:34728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.463 [2024-07-24 14:29:58.853983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:34.463 [2024-07-24 14:29:58.854001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:34376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x183400 00:31:34.463 [2024-07-24 14:29:58.854016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:34.463 [2024-07-24 14:29:58.854033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:34744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.463 [2024-07-24 14:29:58.854049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:34.463 [2024-07-24 14:29:58.854066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:34768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.463 [2024-07-24 14:29:58.854092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:34.463 [2024-07-24 14:29:58.854109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:34784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.463 [2024-07-24 14:29:58.854124] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:34.463 [2024-07-24 14:29:58.854145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:34800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.463 [2024-07-24 14:29:58.854161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:34.463 [2024-07-24 14:29:58.854179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:34544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x183400 00:31:34.463 [2024-07-24 14:29:58.854194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:34.463 [2024-07-24 14:29:58.854211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:34568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x183400 00:31:34.463 [2024-07-24 14:29:58.854226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:34.463 [2024-07-24 14:29:58.854243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:34840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.463 [2024-07-24 14:29:58.854258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:34.463 [2024-07-24 14:29:58.854275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:33664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x183400 00:31:34.463 [2024-07-24 14:29:58.854290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:34.463 [2024-07-24 14:29:58.854308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x183400 00:31:34.463 [2024-07-24 14:29:58.854323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:34.463 [2024-07-24 14:29:58.854340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.463 [2024-07-24 14:29:58.854355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:34.463 [2024-07-24 14:29:58.854372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x183400 00:31:34.463 [2024-07-24 14:29:58.854387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:34.463 [2024-07-24 14:29:58.854404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.463 [2024-07-24 14:29:58.854419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:34.463 [2024-07-24 14:29:58.854438] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:33768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x183400 00:31:34.463 [2024-07-24 14:29:58.854454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:34.463 [2024-07-24 14:29:58.854471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:34328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.463 [2024-07-24 14:29:58.854486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:34.463 [2024-07-24 14:29:58.854503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:34360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.463 [2024-07-24 14:29:58.854518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:34.463 [2024-07-24 14:29:58.854540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:34392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.463 [2024-07-24 14:29:58.854556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:34.463 [2024-07-24 14:29:58.854574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:33968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x183400 00:31:34.463 [2024-07-24 14:29:58.854589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:34.463 [2024-07-24 14:29:58.854607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:34040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x183400 00:31:34.463 [2024-07-24 14:29:58.854622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:34.463 [2024-07-24 14:29:58.854640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:34480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.463 [2024-07-24 14:29:58.854656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:34.463 [2024-07-24 14:29:58.854674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:34520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.463 [2024-07-24 14:29:58.854689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:34.463 [2024-07-24 14:29:58.854706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.463 [2024-07-24 14:29:58.854721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:34.463 [2024-07-24 14:29:58.854738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:34.463 [2024-07-24 14:29:58.854753] nvme_qpair.c: 
00:31:34.464 [2024-07-24 14:29:58.854813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:34872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:34.464 [2024-07-24 14:29:58.854832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:31:34.464 Received shutdown signal, test time was about 32.233691 seconds
00:31:34.464
00:31:34.464                                                            Latency(us)
00:31:34.464 Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:34.464 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:31:34.464 Verification LBA range: start 0x0 length 0x4000
00:31:34.464 	 Nvme0n1             :      32.23   12475.26      48.73       0.00       0.00   10236.50     116.05 4026531.84
00:31:34.464 ===================================================================================================================
00:31:34.464 Total                  :              12475.26      48.73       0.00       0.00   10236.50     116.05 4026531.84
00:31:34.464 14:30:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:34.723 14:30:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:31:34.724 14:30:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:31:34.724 14:30:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:31:34.724 14:30:01 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:31:34.724 14:30:01 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:31:34.724 14:30:01 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:31:34.724 14:30:01 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:31:34.724 14:30:01 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:31:34.724 14:30:01 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:31:34.724 14:30:01 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:31:34.724 rmmod nvme_rdma
00:31:34.724 rmmod nvme_fabrics
00:31:34.724 14:30:02 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:31:34.724 14:30:02 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:31:34.724 14:30:02 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:31:34.724 14:30:02 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 228914 ']'
00:31:34.724 14:30:02 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 228914
00:31:34.724 14:30:02 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 228914 ']'
00:31:34.724 14:30:02 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 228914
00:31:34.724 14:30:02 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname
00:31:34.724 14:30:02 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:31:34.724 14:30:02 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 228914
00:31:34.724 14:30:02 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:31:34.724 14:30:02 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:31:34.724 14:30:02 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 228914'
00:31:34.724 killing process with pid 228914
00:31:34.724 14:30:02 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 228914
00:31:34.724 14:30:02 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 228914
00:31:35.292 14:30:02 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:31:35.292 14:30:02 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]]
00:31:35.292
00:31:35.292 real	0m38.996s
00:31:35.292 user	2m7.528s
00:31:35.292 sys	0m6.062s
00:31:35.292 14:30:02 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable
00:31:35.292 14:30:02 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:31:35.292 ************************************
00:31:35.292 END TEST nvmf_host_multipath_status
00:31:35.292 ************************************
00:31:35.292 14:30:02 nvmf_rdma -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma
00:31:35.292 14:30:02 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:31:35.292 14:30:02 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable
00:31:35.292 14:30:02 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:31:35.292 ************************************
00:31:35.292 START TEST nvmf_discovery_remove_ifc
00:31:35.292 ************************************
00:31:35.292 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma
00:31:35.292 * Looking for test storage...
00:31:35.292 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:31:35.292 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:31:35.292 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:31:35.292 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:31:35.292 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:31:35.292 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:31:35.292 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:31:35.292 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:31:35.292 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:31:35.292 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:31:35.292 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:31:35.292 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:31:35.292 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:31:35.292 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911
00:31:35.292 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911
00:31:35.292 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:31:35.292 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:31:35.292 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:31:35.293 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:31:35.293 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:31:35.293 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:31:35.293 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:31:35.293 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:31:35.293 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:35.293 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:35.293 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:35.293 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH
00:31:35.293 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:35.293 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0
00:31:35.293 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:31:35.293 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:31:35.293 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:31:35.293 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:31:35.293 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:31:35.293 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:31:35.293 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:31:35.293 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0
00:31:35.293 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']'
00:31:35.293 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
00:31:35.293 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.
00:31:35.293 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:31:35.293 00:31:35.293 real 0m0.064s 00:31:35.293 user 0m0.030s 00:31:35.293 sys 0m0.039s 00:31:35.293 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:35.293 14:30:02 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:35.293 ************************************ 00:31:35.293 END TEST nvmf_discovery_remove_ifc 00:31:35.293 ************************************ 00:31:35.293 14:30:02 nvmf_rdma -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:31:35.293 14:30:02 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:35.293 14:30:02 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:35.293 14:30:02 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:31:35.293 ************************************ 00:31:35.293 START TEST nvmf_identify_kernel_target 00:31:35.293 ************************************ 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:31:35.293 * Looking for test storage... 00:31:35.293 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:35.293 14:30:02 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:35.293 14:30:02 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:35.293 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:35.294 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:35.294 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:35.294 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:31:35.294 14:30:02 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:31:37.825 
14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:31:37.825 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:37.825 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:31:37.826 Found 0000:81:00.1 (0x15b3 - 
0x1015) 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:31:37.826 Found net devices under 0000:81:00.0: mlx_0_0 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:31:37.826 Found net devices under 0000:81:00.1: mlx_0_1 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # rdma_device_init 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # uname 00:31:37.826 14:30:04 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show 
mlx_0_0 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:31:37.826 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:37.826 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:31:37.826 altname enp129s0f0np0 00:31:37.826 inet 192.168.100.8/24 scope global mlx_0_0 00:31:37.826 valid_lft forever preferred_lft forever 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:31:37.826 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:37.826 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:31:37.826 altname enp129s0f1np1 00:31:37.826 inet 192.168.100.9/24 scope global mlx_0_1 00:31:37.826 valid_lft forever preferred_lft forever 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:31:37.826 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:31:37.827 192.168.100.9' 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:31:37.827 192.168.100.9' 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # head -n 1 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:31:37.827 192.168.100.9' 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # tail -n +2 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # head -n 1 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:31:37.827 14:30:04 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:37.827 14:30:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:31:38.760 Waiting for block devices as requested 00:31:39.018 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:31:39.018 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:39.274 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:39.274 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:39.274 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:39.531 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:39.531 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:39.531 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:39.531 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:39.531 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:39.789 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:39.789 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:39.789 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:39.789 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:40.046 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:40.046 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:40.046 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:40.306 14:30:07 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:40.306 14:30:07 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:40.306 14:30:07 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:40.306 14:30:07 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:31:40.306 14:30:07 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:40.306 14:30:07 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:31:40.306 14:30:07 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:40.306 14:30:07 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:40.306 14:30:07 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:40.306 No valid GPT data, bailing 00:31:40.306 14:30:07 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:40.306 14:30:07 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:31:40.306 14:30:07 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:31:40.306 14:30:07 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:40.306 14:30:07 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:40.306 14:30:07 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:40.306 14:30:07 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:40.306 14:30:07 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir 
/sys/kernel/config/nvmet/ports/1 00:31:40.306 14:30:07 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:40.306 14:30:07 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:31:40.306 14:30:07 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:40.306 14:30:07 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:31:40.306 14:30:07 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:31:40.306 14:30:07 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo rdma 00:31:40.306 14:30:07 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:31:40.306 14:30:07 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:31:40.306 14:30:07 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:40.306 14:30:07 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -a 192.168.100.8 -t rdma -s 4420 00:31:40.566 00:31:40.566 Discovery Log Number of Records 2, Generation counter 2 00:31:40.566 =====Discovery Log Entry 0====== 00:31:40.566 trtype: rdma 00:31:40.566 adrfam: ipv4 00:31:40.566 subtype: current discovery subsystem 00:31:40.566 treq: not specified, sq flow control disable supported 00:31:40.566 portid: 1 00:31:40.566 trsvcid: 4420 00:31:40.566 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:40.566 traddr: 192.168.100.8 00:31:40.566 eflags: none 00:31:40.566 rdma_prtype: not specified 00:31:40.566 rdma_qptype: connected 00:31:40.566 rdma_cms: rdma-cm 00:31:40.566 rdma_pkey: 0x0000 00:31:40.566 =====Discovery Log Entry 1====== 00:31:40.566 trtype: rdma 00:31:40.566 adrfam: ipv4 00:31:40.566 subtype: nvme subsystem 00:31:40.566 treq: not specified, sq flow control disable supported 00:31:40.566 portid: 1 00:31:40.566 trsvcid: 4420 00:31:40.566 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:40.566 traddr: 192.168.100.8 00:31:40.566 eflags: none 00:31:40.566 rdma_prtype: not specified 00:31:40.566 rdma_qptype: connected 00:31:40.566 rdma_cms: rdma-cm 00:31:40.566 rdma_pkey: 0x0000 00:31:40.566 14:30:07 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:31:40.566 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:31:40.566 EAL: No free 2048 kB hugepages reported on node 1 00:31:40.567 ===================================================== 00:31:40.567 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:40.567 ===================================================== 00:31:40.567 Controller Capabilities/Features 00:31:40.567 ================================ 00:31:40.567 Vendor ID: 0000 00:31:40.567 Subsystem Vendor ID: 0000 00:31:40.567 Serial Number: 3ffcfbacb0e151339a3f 00:31:40.567 Model Number: Linux 00:31:40.567 Firmware Version: 6.7.0-68 00:31:40.567 Recommended Arb Burst: 0 00:31:40.567 IEEE OUI Identifier: 00 00 00 00:31:40.567 Multi-path I/O 00:31:40.567 May have multiple subsystem ports: No 00:31:40.567 May have multiple controllers: No 00:31:40.567 Associated with SR-IOV VF: No 00:31:40.567 
Max Data Transfer Size: Unlimited 00:31:40.567 Max Number of Namespaces: 0 00:31:40.567 Max Number of I/O Queues: 1024 00:31:40.567 NVMe Specification Version (VS): 1.3 00:31:40.567 NVMe Specification Version (Identify): 1.3 00:31:40.567 Maximum Queue Entries: 128 00:31:40.567 Contiguous Queues Required: No 00:31:40.567 Arbitration Mechanisms Supported 00:31:40.567 Weighted Round Robin: Not Supported 00:31:40.567 Vendor Specific: Not Supported 00:31:40.567 Reset Timeout: 7500 ms 00:31:40.567 Doorbell Stride: 4 bytes 00:31:40.567 NVM Subsystem Reset: Not Supported 00:31:40.567 Command Sets Supported 00:31:40.567 NVM Command Set: Supported 00:31:40.567 Boot Partition: Not Supported 00:31:40.567 Memory Page Size Minimum: 4096 bytes 00:31:40.567 Memory Page Size Maximum: 4096 bytes 00:31:40.567 Persistent Memory Region: Not Supported 00:31:40.567 Optional Asynchronous Events Supported 00:31:40.567 Namespace Attribute Notices: Not Supported 00:31:40.567 Firmware Activation Notices: Not Supported 00:31:40.567 ANA Change Notices: Not Supported 00:31:40.567 PLE Aggregate Log Change Notices: Not Supported 00:31:40.567 LBA Status Info Alert Notices: Not Supported 00:31:40.567 EGE Aggregate Log Change Notices: Not Supported 00:31:40.567 Normal NVM Subsystem Shutdown event: Not Supported 00:31:40.567 Zone Descriptor Change Notices: Not Supported 00:31:40.567 Discovery Log Change Notices: Supported 00:31:40.567 Controller Attributes 00:31:40.567 128-bit Host Identifier: Not Supported 00:31:40.567 Non-Operational Permissive Mode: Not Supported 00:31:40.567 NVM Sets: Not Supported 00:31:40.567 Read Recovery Levels: Not Supported 00:31:40.567 Endurance Groups: Not Supported 00:31:40.567 Predictable Latency Mode: Not Supported 00:31:40.567 Traffic Based Keep ALive: Not Supported 00:31:40.567 Namespace Granularity: Not Supported 00:31:40.567 SQ Associations: Not Supported 00:31:40.567 UUID List: Not Supported 00:31:40.567 Multi-Domain Subsystem: Not Supported 00:31:40.567 Fixed Capacity Management: Not Supported 00:31:40.567 Variable Capacity Management: Not Supported 00:31:40.567 Delete Endurance Group: Not Supported 00:31:40.567 Delete NVM Set: Not Supported 00:31:40.567 Extended LBA Formats Supported: Not Supported 00:31:40.567 Flexible Data Placement Supported: Not Supported 00:31:40.567 00:31:40.567 Controller Memory Buffer Support 00:31:40.567 ================================ 00:31:40.567 Supported: No 00:31:40.567 00:31:40.567 Persistent Memory Region Support 00:31:40.567 ================================ 00:31:40.567 Supported: No 00:31:40.567 00:31:40.567 Admin Command Set Attributes 00:31:40.567 ============================ 00:31:40.567 Security Send/Receive: Not Supported 00:31:40.567 Format NVM: Not Supported 00:31:40.567 Firmware Activate/Download: Not Supported 00:31:40.567 Namespace Management: Not Supported 00:31:40.567 Device Self-Test: Not Supported 00:31:40.567 Directives: Not Supported 00:31:40.567 NVMe-MI: Not Supported 00:31:40.567 Virtualization Management: Not Supported 00:31:40.567 Doorbell Buffer Config: Not Supported 00:31:40.567 Get LBA Status Capability: Not Supported 00:31:40.567 Command & Feature Lockdown Capability: Not Supported 00:31:40.567 Abort Command Limit: 1 00:31:40.567 Async Event Request Limit: 1 00:31:40.567 Number of Firmware Slots: N/A 00:31:40.567 Firmware Slot 1 Read-Only: N/A 00:31:40.567 Firmware Activation Without Reset: N/A 00:31:40.567 Multiple Update Detection Support: N/A 00:31:40.567 Firmware Update Granularity: No Information Provided 00:31:40.567 
Per-Namespace SMART Log: No 00:31:40.567 Asymmetric Namespace Access Log Page: Not Supported 00:31:40.567 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:40.567 Command Effects Log Page: Not Supported 00:31:40.567 Get Log Page Extended Data: Supported 00:31:40.567 Telemetry Log Pages: Not Supported 00:31:40.567 Persistent Event Log Pages: Not Supported 00:31:40.567 Supported Log Pages Log Page: May Support 00:31:40.567 Commands Supported & Effects Log Page: Not Supported 00:31:40.567 Feature Identifiers & Effects Log Page:May Support 00:31:40.567 NVMe-MI Commands & Effects Log Page: May Support 00:31:40.567 Data Area 4 for Telemetry Log: Not Supported 00:31:40.567 Error Log Page Entries Supported: 1 00:31:40.567 Keep Alive: Not Supported 00:31:40.567 00:31:40.567 NVM Command Set Attributes 00:31:40.567 ========================== 00:31:40.567 Submission Queue Entry Size 00:31:40.567 Max: 1 00:31:40.567 Min: 1 00:31:40.567 Completion Queue Entry Size 00:31:40.567 Max: 1 00:31:40.567 Min: 1 00:31:40.567 Number of Namespaces: 0 00:31:40.567 Compare Command: Not Supported 00:31:40.567 Write Uncorrectable Command: Not Supported 00:31:40.567 Dataset Management Command: Not Supported 00:31:40.567 Write Zeroes Command: Not Supported 00:31:40.567 Set Features Save Field: Not Supported 00:31:40.567 Reservations: Not Supported 00:31:40.567 Timestamp: Not Supported 00:31:40.567 Copy: Not Supported 00:31:40.567 Volatile Write Cache: Not Present 00:31:40.567 Atomic Write Unit (Normal): 1 00:31:40.567 Atomic Write Unit (PFail): 1 00:31:40.567 Atomic Compare & Write Unit: 1 00:31:40.567 Fused Compare & Write: Not Supported 00:31:40.567 Scatter-Gather List 00:31:40.567 SGL Command Set: Supported 00:31:40.567 SGL Keyed: Supported 00:31:40.567 SGL Bit Bucket Descriptor: Not Supported 00:31:40.567 SGL Metadata Pointer: Not Supported 00:31:40.567 Oversized SGL: Not Supported 00:31:40.567 SGL Metadata Address: Not Supported 00:31:40.567 SGL Offset: Supported 00:31:40.567 Transport SGL Data Block: Not Supported 00:31:40.567 Replay Protected Memory Block: Not Supported 00:31:40.567 00:31:40.567 Firmware Slot Information 00:31:40.567 ========================= 00:31:40.567 Active slot: 0 00:31:40.567 00:31:40.567 00:31:40.567 Error Log 00:31:40.567 ========= 00:31:40.567 00:31:40.567 Active Namespaces 00:31:40.567 ================= 00:31:40.567 Discovery Log Page 00:31:40.567 ================== 00:31:40.567 Generation Counter: 2 00:31:40.567 Number of Records: 2 00:31:40.567 Record Format: 0 00:31:40.567 00:31:40.567 Discovery Log Entry 0 00:31:40.567 ---------------------- 00:31:40.567 Transport Type: 1 (RDMA) 00:31:40.567 Address Family: 1 (IPv4) 00:31:40.567 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:40.567 Entry Flags: 00:31:40.567 Duplicate Returned Information: 0 00:31:40.567 Explicit Persistent Connection Support for Discovery: 0 00:31:40.567 Transport Requirements: 00:31:40.567 Secure Channel: Not Specified 00:31:40.567 Port ID: 1 (0x0001) 00:31:40.567 Controller ID: 65535 (0xffff) 00:31:40.567 Admin Max SQ Size: 32 00:31:40.567 Transport Service Identifier: 4420 00:31:40.567 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:40.567 Transport Address: 192.168.100.8 00:31:40.567 Transport Specific Address Subtype - RDMA 00:31:40.567 RDMA QP Service Type: 1 (Reliable Connected) 00:31:40.567 RDMA Provider Type: 1 (No provider specified) 00:31:40.567 RDMA CM Service: 1 (RDMA_CM) 00:31:40.567 Discovery Log Entry 1 00:31:40.567 ---------------------- 00:31:40.567 
Transport Type: 1 (RDMA) 00:31:40.567 Address Family: 1 (IPv4) 00:31:40.567 Subsystem Type: 2 (NVM Subsystem) 00:31:40.567 Entry Flags: 00:31:40.567 Duplicate Returned Information: 0 00:31:40.567 Explicit Persistent Connection Support for Discovery: 0 00:31:40.567 Transport Requirements: 00:31:40.567 Secure Channel: Not Specified 00:31:40.567 Port ID: 1 (0x0001) 00:31:40.567 Controller ID: 65535 (0xffff) 00:31:40.567 Admin Max SQ Size: 32 00:31:40.567 Transport Service Identifier: 4420 00:31:40.567 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:31:40.567 Transport Address: 192.168.100.8 00:31:40.567 Transport Specific Address Subtype - RDMA 00:31:40.568 RDMA QP Service Type: 1 (Reliable Connected) 00:31:40.568 RDMA Provider Type: 1 (No provider specified) 00:31:40.568 RDMA CM Service: 1 (RDMA_CM) 00:31:40.568 14:30:07 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:40.568 EAL: No free 2048 kB hugepages reported on node 1 00:31:40.827 get_feature(0x01) failed 00:31:40.827 get_feature(0x02) failed 00:31:40.827 get_feature(0x04) failed 00:31:40.827 ===================================================== 00:31:40.827 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:31:40.827 ===================================================== 00:31:40.827 Controller Capabilities/Features 00:31:40.827 ================================ 00:31:40.827 Vendor ID: 0000 00:31:40.827 Subsystem Vendor ID: 0000 00:31:40.827 Serial Number: b327eb5c7fbbf31dcc6b 00:31:40.827 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:31:40.827 Firmware Version: 6.7.0-68 00:31:40.827 Recommended Arb Burst: 6 00:31:40.827 IEEE OUI Identifier: 00 00 00 00:31:40.827 Multi-path I/O 00:31:40.827 May have multiple subsystem ports: Yes 00:31:40.827 May have multiple controllers: Yes 00:31:40.827 Associated with SR-IOV VF: No 00:31:40.827 Max Data Transfer Size: 1048576 00:31:40.827 Max Number of Namespaces: 1024 00:31:40.827 Max Number of I/O Queues: 128 00:31:40.827 NVMe Specification Version (VS): 1.3 00:31:40.827 NVMe Specification Version (Identify): 1.3 00:31:40.827 Maximum Queue Entries: 128 00:31:40.827 Contiguous Queues Required: No 00:31:40.827 Arbitration Mechanisms Supported 00:31:40.827 Weighted Round Robin: Not Supported 00:31:40.827 Vendor Specific: Not Supported 00:31:40.827 Reset Timeout: 7500 ms 00:31:40.827 Doorbell Stride: 4 bytes 00:31:40.827 NVM Subsystem Reset: Not Supported 00:31:40.827 Command Sets Supported 00:31:40.827 NVM Command Set: Supported 00:31:40.827 Boot Partition: Not Supported 00:31:40.827 Memory Page Size Minimum: 4096 bytes 00:31:40.827 Memory Page Size Maximum: 4096 bytes 00:31:40.827 Persistent Memory Region: Not Supported 00:31:40.827 Optional Asynchronous Events Supported 00:31:40.827 Namespace Attribute Notices: Supported 00:31:40.827 Firmware Activation Notices: Not Supported 00:31:40.827 ANA Change Notices: Supported 00:31:40.827 PLE Aggregate Log Change Notices: Not Supported 00:31:40.827 LBA Status Info Alert Notices: Not Supported 00:31:40.827 EGE Aggregate Log Change Notices: Not Supported 00:31:40.827 Normal NVM Subsystem Shutdown event: Not Supported 00:31:40.827 Zone Descriptor Change Notices: Not Supported 00:31:40.827 Discovery Log Change Notices: Not Supported 00:31:40.827 Controller Attributes 00:31:40.827 128-bit Host Identifier: 
Supported 00:31:40.827 Non-Operational Permissive Mode: Not Supported 00:31:40.827 NVM Sets: Not Supported 00:31:40.827 Read Recovery Levels: Not Supported 00:31:40.827 Endurance Groups: Not Supported 00:31:40.827 Predictable Latency Mode: Not Supported 00:31:40.827 Traffic Based Keep ALive: Supported 00:31:40.827 Namespace Granularity: Not Supported 00:31:40.827 SQ Associations: Not Supported 00:31:40.827 UUID List: Not Supported 00:31:40.827 Multi-Domain Subsystem: Not Supported 00:31:40.827 Fixed Capacity Management: Not Supported 00:31:40.827 Variable Capacity Management: Not Supported 00:31:40.827 Delete Endurance Group: Not Supported 00:31:40.827 Delete NVM Set: Not Supported 00:31:40.827 Extended LBA Formats Supported: Not Supported 00:31:40.827 Flexible Data Placement Supported: Not Supported 00:31:40.827 00:31:40.827 Controller Memory Buffer Support 00:31:40.827 ================================ 00:31:40.827 Supported: No 00:31:40.827 00:31:40.827 Persistent Memory Region Support 00:31:40.827 ================================ 00:31:40.827 Supported: No 00:31:40.827 00:31:40.827 Admin Command Set Attributes 00:31:40.827 ============================ 00:31:40.827 Security Send/Receive: Not Supported 00:31:40.827 Format NVM: Not Supported 00:31:40.827 Firmware Activate/Download: Not Supported 00:31:40.827 Namespace Management: Not Supported 00:31:40.827 Device Self-Test: Not Supported 00:31:40.827 Directives: Not Supported 00:31:40.827 NVMe-MI: Not Supported 00:31:40.827 Virtualization Management: Not Supported 00:31:40.827 Doorbell Buffer Config: Not Supported 00:31:40.827 Get LBA Status Capability: Not Supported 00:31:40.827 Command & Feature Lockdown Capability: Not Supported 00:31:40.827 Abort Command Limit: 4 00:31:40.827 Async Event Request Limit: 4 00:31:40.827 Number of Firmware Slots: N/A 00:31:40.827 Firmware Slot 1 Read-Only: N/A 00:31:40.827 Firmware Activation Without Reset: N/A 00:31:40.827 Multiple Update Detection Support: N/A 00:31:40.827 Firmware Update Granularity: No Information Provided 00:31:40.827 Per-Namespace SMART Log: Yes 00:31:40.827 Asymmetric Namespace Access Log Page: Supported 00:31:40.827 ANA Transition Time : 10 sec 00:31:40.827 00:31:40.827 Asymmetric Namespace Access Capabilities 00:31:40.827 ANA Optimized State : Supported 00:31:40.827 ANA Non-Optimized State : Supported 00:31:40.827 ANA Inaccessible State : Supported 00:31:40.827 ANA Persistent Loss State : Supported 00:31:40.827 ANA Change State : Supported 00:31:40.827 ANAGRPID is not changed : No 00:31:40.827 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:31:40.827 00:31:40.827 ANA Group Identifier Maximum : 128 00:31:40.827 Number of ANA Group Identifiers : 128 00:31:40.827 Max Number of Allowed Namespaces : 1024 00:31:40.827 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:31:40.827 Command Effects Log Page: Supported 00:31:40.827 Get Log Page Extended Data: Supported 00:31:40.827 Telemetry Log Pages: Not Supported 00:31:40.827 Persistent Event Log Pages: Not Supported 00:31:40.827 Supported Log Pages Log Page: May Support 00:31:40.827 Commands Supported & Effects Log Page: Not Supported 00:31:40.827 Feature Identifiers & Effects Log Page:May Support 00:31:40.827 NVMe-MI Commands & Effects Log Page: May Support 00:31:40.827 Data Area 4 for Telemetry Log: Not Supported 00:31:40.827 Error Log Page Entries Supported: 128 00:31:40.827 Keep Alive: Supported 00:31:40.827 Keep Alive Granularity: 1000 ms 00:31:40.827 00:31:40.827 NVM Command Set Attributes 00:31:40.827 ========================== 
00:31:40.827 Submission Queue Entry Size 00:31:40.827 Max: 64 00:31:40.827 Min: 64 00:31:40.828 Completion Queue Entry Size 00:31:40.828 Max: 16 00:31:40.828 Min: 16 00:31:40.828 Number of Namespaces: 1024 00:31:40.828 Compare Command: Not Supported 00:31:40.828 Write Uncorrectable Command: Not Supported 00:31:40.828 Dataset Management Command: Supported 00:31:40.828 Write Zeroes Command: Supported 00:31:40.828 Set Features Save Field: Not Supported 00:31:40.828 Reservations: Not Supported 00:31:40.828 Timestamp: Not Supported 00:31:40.828 Copy: Not Supported 00:31:40.828 Volatile Write Cache: Present 00:31:40.828 Atomic Write Unit (Normal): 1 00:31:40.828 Atomic Write Unit (PFail): 1 00:31:40.828 Atomic Compare & Write Unit: 1 00:31:40.828 Fused Compare & Write: Not Supported 00:31:40.828 Scatter-Gather List 00:31:40.828 SGL Command Set: Supported 00:31:40.828 SGL Keyed: Supported 00:31:40.828 SGL Bit Bucket Descriptor: Not Supported 00:31:40.828 SGL Metadata Pointer: Not Supported 00:31:40.828 Oversized SGL: Not Supported 00:31:40.828 SGL Metadata Address: Not Supported 00:31:40.828 SGL Offset: Supported 00:31:40.828 Transport SGL Data Block: Not Supported 00:31:40.828 Replay Protected Memory Block: Not Supported 00:31:40.828 00:31:40.828 Firmware Slot Information 00:31:40.828 ========================= 00:31:40.828 Active slot: 0 00:31:40.828 00:31:40.828 Asymmetric Namespace Access 00:31:40.828 =========================== 00:31:40.828 Change Count : 0 00:31:40.828 Number of ANA Group Descriptors : 1 00:31:40.828 ANA Group Descriptor : 0 00:31:40.828 ANA Group ID : 1 00:31:40.828 Number of NSID Values : 1 00:31:40.828 Change Count : 0 00:31:40.828 ANA State : 1 00:31:40.828 Namespace Identifier : 1 00:31:40.828 00:31:40.828 Commands Supported and Effects 00:31:40.828 ============================== 00:31:40.828 Admin Commands 00:31:40.828 -------------- 00:31:40.828 Get Log Page (02h): Supported 00:31:40.828 Identify (06h): Supported 00:31:40.828 Abort (08h): Supported 00:31:40.828 Set Features (09h): Supported 00:31:40.828 Get Features (0Ah): Supported 00:31:40.828 Asynchronous Event Request (0Ch): Supported 00:31:40.828 Keep Alive (18h): Supported 00:31:40.828 I/O Commands 00:31:40.828 ------------ 00:31:40.828 Flush (00h): Supported 00:31:40.828 Write (01h): Supported LBA-Change 00:31:40.828 Read (02h): Supported 00:31:40.828 Write Zeroes (08h): Supported LBA-Change 00:31:40.828 Dataset Management (09h): Supported 00:31:40.828 00:31:40.828 Error Log 00:31:40.828 ========= 00:31:40.828 Entry: 0 00:31:40.828 Error Count: 0x3 00:31:40.828 Submission Queue Id: 0x0 00:31:40.828 Command Id: 0x5 00:31:40.828 Phase Bit: 0 00:31:40.828 Status Code: 0x2 00:31:40.828 Status Code Type: 0x0 00:31:40.828 Do Not Retry: 1 00:31:40.828 Error Location: 0x28 00:31:40.828 LBA: 0x0 00:31:40.828 Namespace: 0x0 00:31:40.828 Vendor Log Page: 0x0 00:31:40.828 ----------- 00:31:40.828 Entry: 1 00:31:40.828 Error Count: 0x2 00:31:40.828 Submission Queue Id: 0x0 00:31:40.828 Command Id: 0x5 00:31:40.828 Phase Bit: 0 00:31:40.828 Status Code: 0x2 00:31:40.828 Status Code Type: 0x0 00:31:40.828 Do Not Retry: 1 00:31:40.828 Error Location: 0x28 00:31:40.828 LBA: 0x0 00:31:40.828 Namespace: 0x0 00:31:40.828 Vendor Log Page: 0x0 00:31:40.828 ----------- 00:31:40.828 Entry: 2 00:31:40.828 Error Count: 0x1 00:31:40.828 Submission Queue Id: 0x0 00:31:40.828 Command Id: 0x0 00:31:40.828 Phase Bit: 0 00:31:40.828 Status Code: 0x2 00:31:40.828 Status Code Type: 0x0 00:31:40.828 Do Not Retry: 1 00:31:40.828 Error Location: 
0x28 00:31:40.828 LBA: 0x0 00:31:40.828 Namespace: 0x0 00:31:40.828 Vendor Log Page: 0x0 00:31:40.828 00:31:40.828 Number of Queues 00:31:40.828 ================ 00:31:40.828 Number of I/O Submission Queues: 128 00:31:40.828 Number of I/O Completion Queues: 128 00:31:40.828 00:31:40.828 ZNS Specific Controller Data 00:31:40.828 ============================ 00:31:40.828 Zone Append Size Limit: 0 00:31:40.828 00:31:40.828 00:31:40.828 Active Namespaces 00:31:40.828 ================= 00:31:40.828 get_feature(0x05) failed 00:31:40.828 Namespace ID:1 00:31:40.828 Command Set Identifier: NVM (00h) 00:31:40.828 Deallocate: Supported 00:31:40.828 Deallocated/Unwritten Error: Not Supported 00:31:40.828 Deallocated Read Value: Unknown 00:31:40.828 Deallocate in Write Zeroes: Not Supported 00:31:40.828 Deallocated Guard Field: 0xFFFF 00:31:40.828 Flush: Supported 00:31:40.828 Reservation: Not Supported 00:31:40.828 Namespace Sharing Capabilities: Multiple Controllers 00:31:40.828 Size (in LBAs): 1953525168 (931GiB) 00:31:40.828 Capacity (in LBAs): 1953525168 (931GiB) 00:31:40.828 Utilization (in LBAs): 1953525168 (931GiB) 00:31:40.828 UUID: fb91d275-fcca-4c73-b092-0e8af89973e4 00:31:40.828 Thin Provisioning: Not Supported 00:31:40.828 Per-NS Atomic Units: Yes 00:31:40.828 Atomic Boundary Size (Normal): 0 00:31:40.828 Atomic Boundary Size (PFail): 0 00:31:40.828 Atomic Boundary Offset: 0 00:31:40.828 NGUID/EUI64 Never Reused: No 00:31:40.828 ANA group ID: 1 00:31:40.828 Namespace Write Protected: No 00:31:40.828 Number of LBA Formats: 1 00:31:40.828 Current LBA Format: LBA Format #00 00:31:40.828 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:40.828 00:31:40.828 14:30:08 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:31:40.828 14:30:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:40.828 14:30:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:31:40.828 14:30:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:31:40.828 14:30:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:31:40.828 14:30:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:31:40.828 14:30:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:40.828 14:30:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:31:40.828 rmmod nvme_rdma 00:31:40.828 rmmod nvme_fabrics 00:31:40.828 14:30:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:40.828 14:30:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:31:40.828 14:30:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:31:40.828 14:30:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:31:40.828 14:30:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:40.828 14:30:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:31:40.828 14:30:08 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:31:40.828 14:30:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:40.828 14:30:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 
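The echo 0 above is the start of clean_kernel_target; the rm -f, rmdir, and modprobe -r calls that follow undo the configfs tree built earlier in this test (mkdir of the subsystem, namespace, and port, then the symlink from the port to the subsystem). Order matters: the port-to-subsystem link must be removed before the directories, and the directories before the modules. A consolidated sketch of the teardown using the paths from this run; the redirection target of the traced echo 0 is an assumption (disabling the namespace before removal), since the trace does not show it:

    # Tear down the kernel nvmet target built under configfs (run as root).
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    echo 0 > "$subsys/namespaces/1/enable"       # assumed target of the traced 'echo 0'
    rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"  # unlink port -> subsystem first
    rmdir "$subsys/namespaces/1"                 # then remove the namespace...
    rmdir "$nvmet/ports/1"                       # ...the port...
    rmdir "$subsys"                              # ...and the subsystem itself
    modprobe -r nvmet_rdma nvmet                 # finally unload the nvmet modules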
00:31:40.828 14:30:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:40.828 14:30:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:40.828 14:30:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:40.828 14:30:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:40.828 14:30:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:40.828 14:30:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:31:40.828 14:30:08 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:31:42.202 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:42.202 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:42.202 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:42.202 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:42.202 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:42.202 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:42.202 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:42.202 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:42.202 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:42.202 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:42.202 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:42.202 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:42.202 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:42.202 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:42.202 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:42.461 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:43.398 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:31:43.398 00:31:43.398 real 0m8.104s 00:31:43.398 user 0m2.338s 00:31:43.398 sys 0m3.798s 00:31:43.398 14:30:10 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:43.398 14:30:10 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:43.398 ************************************ 00:31:43.398 END TEST nvmf_identify_kernel_target 00:31:43.398 ************************************ 00:31:43.398 14:30:10 nvmf_rdma -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:31:43.398 14:30:10 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:43.398 14:30:10 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:43.398 14:30:10 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:31:43.398 ************************************ 00:31:43.398 START TEST nvmf_auth_host 00:31:43.398 ************************************ 00:31:43.398 14:30:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:31:43.398 * Looking for test storage... 
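[editor's note] clean_kernel_target, traced just before the PCI rebinds above, undoes the kernel nvmet target strictly bottom-up: unlink the port from the subsystem, remove the namespace, the port, then the subsystem, and finally unload the modules. A condensed sketch of that order; the redirect target of the bare "echo 0" is not visible in the trace, so disabling the namespace first is our assumption:

nqn=nqn.2016-06.io.spdk:testnqn
nvmet=/sys/kernel/config/nvmet
echo 0 > "$nvmet/subsystems/$nqn/namespaces/1/enable"   # assumed: disable the ns first
rm -f "$nvmet/ports/1/subsystems/$nqn"                  # unlink port -> subsystem
rmdir "$nvmet/subsystems/$nqn/namespaces/1"             # then remove the namespace
rmdir "$nvmet/ports/1"                                  # the port
rmdir "$nvmet/subsystems/$nqn"                          # the subsystem itself
modprobe -r nvmet_rdma nvmet                            # and the modules last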
00:31:43.398 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@21 -- # ckeys=() 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:31:43.399 14:30:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:31:45.958 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:31:45.958 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:45.958 14:30:12 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:31:45.958 Found net devices under 0000:81:00.0: mlx_0_0 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:31:45.958 Found net devices under 0000:81:00.1: mlx_0_1 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@420 -- # rdma_device_init 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@58 -- # uname 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:31:45.958 14:30:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:31:45.958 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:31:45.958 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:31:45.958 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:31:45.958 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:31:45.958 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:31:45.958 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:45.958 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:31:45.958 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:31:45.958 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:45.958 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:31:45.958 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:45.958 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:31:45.958 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:45.958 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:31:45.958 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:31:45.958 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:45.958 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:45.958 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:45.958 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:45.958 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:45.958 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:31:45.958 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:31:45.959 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:45.959 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:31:45.959 altname enp129s0f0np0 00:31:45.959 inet 192.168.100.8/24 scope global mlx_0_0 00:31:45.959 valid_lft forever preferred_lft forever 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:31:45.959 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:45.959 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:31:45.959 altname enp129s0f1np1 00:31:45.959 inet 192.168.100.9/24 scope global mlx_0_1 00:31:45.959 valid_lft forever preferred_lft forever 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@454 -- # 
NVMF_TRANSPORT_OPTS='-t rdma' 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # get_rdma_if_list 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:31:45.959 192.168.100.9' 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:31:45.959 192.168.100.9' 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # head -n 1 
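[editor's note] The address harvesting above boils down to two small idioms, both lifted straight from nvmf/common.sh: glob the PCI device's net/ directory in sysfs to find its interface name, then parse ip -o -4 output for the address. Condensed:

# Interface name(s) behind a PCI function, e.g. 0000:81:00.0 -> mlx_0_0
pci=0000:81:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep the name

# IPv4 address of an interface, used to build RDMA_IP_LIST
get_ip_address() {
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # 192.168.100.8 on this rig

NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP then fall out of RDMA_IP_LIST via head -n 1 and tail -n +2, as the next trace entries show.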
00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:31:45.959 192.168.100.9' 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # tail -n +2 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # head -n 1 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=239920 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 239920 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 239920 ']' 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
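[editor's note] nvmfappstart launches the target with auth-layer logging enabled (nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth), records its pid in nvmfpid, and waitforlisten blocks until the RPC socket appears. The real helper in autotest_common.sh does more bookkeeping; this is just the shape, with an assumed poll interval:

pid=239920                      # nvmfpid recorded above
rpc=/var/tmp/spdk.sock
max_retries=100                 # matches the local max_retries=100 in the trace
echo "Waiting for process to start up and listen on UNIX domain socket $rpc..."
for ((i = 0; i < max_retries; i++)); do
    kill -0 "$pid" 2> /dev/null || break    # target died; give up
    [[ -S $rpc ]] && break                  # RPC socket is up
    sleep 0.1                               # assumed poll interval
done
[[ -S $rpc ]]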
00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:45.959 14:30:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=617514d0ff77b9634b96c8ddfe5dcb0c 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.M6Q 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 617514d0ff77b9634b96c8ddfe5dcb0c 0 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 617514d0ff77b9634b96c8ddfe5dcb0c 0 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=617514d0ff77b9634b96c8ddfe5dcb0c 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.M6Q 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.M6Q 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.M6Q 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # 
digest=sha512 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7f1f08c3db37bc940f1a7615becab5514c61869e8917c092ed0dede6a26ff59b 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.39p 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7f1f08c3db37bc940f1a7615becab5514c61869e8917c092ed0dede6a26ff59b 3 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7f1f08c3db37bc940f1a7615becab5514c61869e8917c092ed0dede6a26ff59b 3 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7f1f08c3db37bc940f1a7615becab5514c61869e8917c092ed0dede6a26ff59b 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.39p 00:31:46.217 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.39p 00:31:46.218 14:30:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.39p 00:31:46.218 14:30:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:31:46.218 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:46.218 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:46.218 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:46.218 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:46.218 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:46.218 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:46.218 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=70052e266abbfbe37c8ff56639ec04515d69aac504edc5af 00:31:46.218 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:46.218 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.NqQ 00:31:46.218 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 70052e266abbfbe37c8ff56639ec04515d69aac504edc5af 0 00:31:46.218 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 70052e266abbfbe37c8ff56639ec04515d69aac504edc5af 0 00:31:46.218 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:46.218 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:46.218 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=70052e266abbfbe37c8ff56639ec04515d69aac504edc5af 00:31:46.218 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:46.218 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:46.218 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # 
chmod 0600 /tmp/spdk.key-null.NqQ 00:31:46.218 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.NqQ 00:31:46.218 14:30:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.NqQ 00:31:46.218 14:30:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:31:46.218 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:46.218 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:46.218 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:46.218 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:31:46.218 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:46.218 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:46.218 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d1ab71c228c5718212790fabcac980218288bb6bc22a77b5 00:31:46.218 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:31:46.218 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.WPP 00:31:46.218 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d1ab71c228c5718212790fabcac980218288bb6bc22a77b5 2 00:31:46.218 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d1ab71c228c5718212790fabcac980218288bb6bc22a77b5 2 00:31:46.218 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:46.218 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:46.218 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d1ab71c228c5718212790fabcac980218288bb6bc22a77b5 00:31:46.218 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:31:46.218 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:46.475 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.WPP 00:31:46.475 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.WPP 00:31:46.475 14:30:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.WPP 00:31:46.475 14:30:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:46.475 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:46.475 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:46.475 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:46.475 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:31:46.475 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:46.475 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:46.475 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fc4cfaa871f376b92cea710ed5e98eeb 00:31:46.475 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:31:46.475 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.O9T 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fc4cfaa871f376b92cea710ed5e98eeb 1 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host 
-- nvmf/common.sh@719 -- # format_key DHHC-1 fc4cfaa871f376b92cea710ed5e98eeb 1 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fc4cfaa871f376b92cea710ed5e98eeb 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.O9T 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.O9T 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.O9T 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a235acdf8498b75d42eea4904a37b1bd 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.zLa 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a235acdf8498b75d42eea4904a37b1bd 1 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a235acdf8498b75d42eea4904a37b1bd 1 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a235acdf8498b75d42eea4904a37b1bd 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.zLa 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.zLa 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.zLa 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:46.476 14:30:13 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fc29ea0b12dea214ecfbc5ebb1574c00717a3a8c2fa5a152 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.a94 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fc29ea0b12dea214ecfbc5ebb1574c00717a3a8c2fa5a152 2 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fc29ea0b12dea214ecfbc5ebb1574c00717a3a8c2fa5a152 2 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fc29ea0b12dea214ecfbc5ebb1574c00717a3a8c2fa5a152 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.a94 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.a94 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.a94 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d4e280c40cb7cce3e21093aa2305b857 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.VMk 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d4e280c40cb7cce3e21093aa2305b857 0 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d4e280c40cb7cce3e21093aa2305b857 0 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d4e280c40cb7cce3e21093aa2305b857 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.VMk 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.VMk 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.VMk 
00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8dc130cc88a49cccb0bb119c31debf79f2ceb284a4ece9a39760c991e58f39b8 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Blq 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8dc130cc88a49cccb0bb119c31debf79f2ceb284a4ece9a39760c991e58f39b8 3 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8dc130cc88a49cccb0bb119c31debf79f2ceb284a4ece9a39760c991e58f39b8 3 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8dc130cc88a49cccb0bb119c31debf79f2ceb284a4ece9a39760c991e58f39b8 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:31:46.476 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:46.735 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Blq 00:31:46.735 14:30:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Blq 00:31:46.735 14:30:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Blq 00:31:46.735 14:30:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:31:46.735 14:30:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 239920 00:31:46.735 14:30:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 239920 ']' 00:31:46.735 14:30:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:46.735 14:30:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:46.735 14:30:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:46.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
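[editor's note] Each of the five key slots above is produced the same way: pull len/2 random bytes as len hex characters, wrap them in the DHHC-1 secret format, and stash the result in a 0600 temp file. The python one-liner itself is not echoed by xtrace; from the inputs and outputs visible in this log (keys[1], 70052e26..., resurfaces later as DHHC-1:00:NzAwNTJl...ucbEqw==:), the encoding appears to be base64 over the ASCII hex key plus a little-endian CRC32, with a two-digit hash identifier (00 null, 01 sha256, 02 sha384, 03 sha512). A sketch under that reading:

gen_dhchap_key_sketch() {               # e.g. gen_dhchap_key_sketch null 32
    local digest=$1 len=$2 key file
    local -A ids=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex chars of randomness
    file=$(mktemp -t "spdk.key-$digest.XXX")
    python3 -c '
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")          # assumed: CRC over the ASCII key
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
' "$key" "${ids[$digest]}" > "$file"
    chmod 0600 "$file"                   # secrets must not be world-readable
    echo "$file"
}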
00:31:46.735 14:30:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:46.735 14:30:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.735 14:30:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:46.735 14:30:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:31:46.735 14:30:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:46.735 14:30:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.M6Q 00:31:46.735 14:30:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.735 14:30:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.39p ]] 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.39p 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.NqQ 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.WPP ]] 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.WPP 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.O9T 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.zLa ]] 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.zLa 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.a94 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.VMk ]] 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.VMk 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Blq 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@637 -- # 
kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:46.994 14:30:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:31:48.369 Waiting for block devices as requested 00:31:48.370 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:31:48.370 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:48.629 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:48.629 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:48.629 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:48.888 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:48.888 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:48.888 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:48.888 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:49.147 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:49.147 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:49.147 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:49.147 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:49.407 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:49.407 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:49.407 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:49.407 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:49.974 14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:49.974 14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:49.974 14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:49.974 14:30:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:31:49.974 14:30:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:49.974 14:30:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:31:49.974 14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:49.974 14:30:17 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:49.974 14:30:17 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:49.974 No valid GPT data, bailing 00:31:49.974 14:30:17 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:49.974 14:30:17 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:31:49.974 14:30:17 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:31:49.974 14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:49.974 14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:49.974 14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:49.974 14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:49.974 14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:49.974 
14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:31:49.974 14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:31:49.974 14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:49.974 14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:31:49.974 14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:31:49.974 14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@672 -- # echo rdma 00:31:49.974 14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:31:49.974 14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:31:49.974 14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:49.974 14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 --hostid=6b85a288-a0c4-e211-af09-001e678e7911 -a 192.168.100.8 -t rdma -s 4420 00:31:50.233 00:31:50.233 Discovery Log Number of Records 2, Generation counter 2 00:31:50.233 =====Discovery Log Entry 0====== 00:31:50.233 trtype: rdma 00:31:50.233 adrfam: ipv4 00:31:50.233 subtype: current discovery subsystem 00:31:50.233 treq: not specified, sq flow control disable supported 00:31:50.233 portid: 1 00:31:50.233 trsvcid: 4420 00:31:50.233 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:50.233 traddr: 192.168.100.8 00:31:50.233 eflags: none 00:31:50.233 rdma_prtype: not specified 00:31:50.233 rdma_qptype: connected 00:31:50.233 rdma_cms: rdma-cm 00:31:50.233 rdma_pkey: 0x0000 00:31:50.233 =====Discovery Log Entry 1====== 00:31:50.233 trtype: rdma 00:31:50.233 adrfam: ipv4 00:31:50.233 subtype: nvme subsystem 00:31:50.233 treq: not specified, sq flow control disable supported 00:31:50.233 portid: 1 00:31:50.233 trsvcid: 4420 00:31:50.233 subnqn: nqn.2024-02.io.spdk:cnode0 00:31:50.233 traddr: 192.168.100.8 00:31:50.233 eflags: none 00:31:50.233 rdma_prtype: not specified 00:31:50.233 rdma_qptype: connected 00:31:50.233 rdma_cms: rdma-cm 00:31:50.233 rdma_pkey: 0x0000 00:31:50.233 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:50.233 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:31:50.233 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:31:50.233 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:50.233 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.233 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:50.233 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:50.233 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:50.233 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAwNTJlMjY2YWJiZmJlMzdjOGZmNTY2MzllYzA0NTE1ZDY5YWFjNTA0ZWRjNWFmucbEqw==: 00:31:50.233 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: 00:31:50.233 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha256)' 00:31:50.233 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:50.233 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAwNTJlMjY2YWJiZmJlMzdjOGZmNTY2MzllYzA0NTE1ZDY5YWFjNTA0ZWRjNWFmucbEqw==: 00:31:50.233 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: ]] 00:31:50.233 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: 00:31:50.233 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:50.233 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:31:50.233 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:50.233 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:50.233 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:31:50.233 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.233 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:31:50.233 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:50.233 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:50.233 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.233 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:50.233 14:30:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.233 14:30:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.233 14:30:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.233 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.233 14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:50.233 14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:50.233 14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:50.233 14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.233 14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.233 14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:50.233 14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:50.234 14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:50.234 14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:50.234 14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:50.234 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:50.234 14:30:17 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.234 14:30:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.492 nvme0n1 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE3NTE0ZDBmZjc3Yjk2MzRiOTZjOGRkZmU1ZGNiMGP2Poh7: 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE3NTE0ZDBmZjc3Yjk2MzRiOTZjOGRkZmU1ZGNiMGP2Poh7: 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: ]] 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.492 14:30:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.752 nvme0n1 00:31:50.752 14:30:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.752 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.752 14:30:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.752 14:30:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.752 14:30:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.752 14:30:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.752 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.752 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.752 14:30:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.752 14:30:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.752 14:30:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.752 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.752 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:50.752 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.752 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:50.752 14:30:18 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:50.752 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:50.753 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAwNTJlMjY2YWJiZmJlMzdjOGZmNTY2MzllYzA0NTE1ZDY5YWFjNTA0ZWRjNWFmucbEqw==: 00:31:50.753 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: 00:31:50.753 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:50.753 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:50.753 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAwNTJlMjY2YWJiZmJlMzdjOGZmNTY2MzllYzA0NTE1ZDY5YWFjNTA0ZWRjNWFmucbEqw==: 00:31:50.753 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: ]] 00:31:50.753 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: 00:31:50.753 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:31:50.753 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.753 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:50.753 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:50.753 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:50.753 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.753 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:50.753 14:30:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.753 14:30:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.753 14:30:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.753 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.753 14:30:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:50.753 14:30:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:50.753 14:30:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:50.753 14:30:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.753 14:30:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.753 14:30:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:50.753 14:30:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:50.753 14:30:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:50.753 14:30:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:50.753 14:30:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:50.753 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:50.753 14:30:18 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.753 14:30:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.013 nvme0n1 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmM0Y2ZhYTg3MWYzNzZiOTJjZWE3MTBlZDVlOThlZWJQpsMC: 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmM0Y2ZhYTg3MWYzNzZiOTJjZWE3MTBlZDVlOThlZWJQpsMC: 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: ]] 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.013 14:30:18 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.013 14:30:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.272 nvme0n1 00:31:51.272 14:30:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.272 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.272 14:30:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.272 14:30:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.272 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:51.272 14:30:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.531 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.531 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.531 14:30:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.531 14:30:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.531 14:30:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.531 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:51.531 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:31:51.531 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:51.531 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:51.531 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:51.531 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:51.531 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmMyOWVhMGIxMmRlYTIxNGVjZmJjNWViYjE1NzRjMDA3MTdhM2E4YzJmYTVhMTUywArm9A==: 00:31:51.531 14:30:18 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: 00:31:51.531 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:51.531 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:51.531 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmMyOWVhMGIxMmRlYTIxNGVjZmJjNWViYjE1NzRjMDA3MTdhM2E4YzJmYTVhMTUywArm9A==: 00:31:51.531 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: ]] 00:31:51.531 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: 00:31:51.531 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:31:51.531 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:51.531 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:51.531 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:51.531 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:51.531 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:51.531 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:51.531 14:30:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.531 14:30:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.531 14:30:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.531 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:51.532 14:30:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:51.532 14:30:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:51.532 14:30:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:51.532 14:30:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.532 14:30:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.532 14:30:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:51.532 14:30:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:51.532 14:30:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:51.532 14:30:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:51.532 14:30:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:51.532 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:51.532 14:30:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.532 14:30:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.791 nvme0n1 00:31:51.791 14:30:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.791 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.791 14:30:18 nvmf_rdma.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.791 14:30:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.791 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:51.791 14:30:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.791 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.791 14:30:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.791 14:30:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.791 14:30:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.791 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.791 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:51.791 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:31:51.791 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:51.791 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:51.791 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:51.791 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:51.791 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGRjMTMwY2M4OGE0OWNjY2IwYmIxMTljMzFkZWJmNzlmMmNlYjI4NGE0ZWNlOWEzOTc2MGM5OTFlNThmMzliOJnPsSM=: 00:31:51.791 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:51.791 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:51.791 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:51.791 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGRjMTMwY2M4OGE0OWNjY2IwYmIxMTljMzFkZWJmNzlmMmNlYjI4NGE0ZWNlOWEzOTc2MGM5OTFlNThmMzliOJnPsSM=: 00:31:51.791 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:51.791 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:31:51.791 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:51.791 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:51.791 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:51.791 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:51.791 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:51.791 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:51.791 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.791 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.791 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.791 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:51.791 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:51.791 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:51.791 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:51.791 14:30:19 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.791 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.791 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:51.791 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:51.791 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:51.791 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:51.791 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:51.791 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:51.791 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.791 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.049 nvme0n1 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE3NTE0ZDBmZjc3Yjk2MzRiOTZjOGRkZmU1ZGNiMGP2Poh7: 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE3NTE0ZDBmZjc3Yjk2MzRiOTZjOGRkZmU1ZGNiMGP2Poh7: 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- 
# [[ -z DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: ]] 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.049 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.307 nvme0n1 00:31:52.307 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.307 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.307 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.307 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.307 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.307 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.307 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 
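
[Annotation] Each iteration traced above and below drives the same host-side sequence, varying only the digest, DH group, and key index. As a minimal sketch (not part of the captured log), the rpc_cmd calls recorded here correspond to direct scripts/rpc.py invocations like the following, reusing the key files, NQNs, and target address from this run; the $rpc shorthand and the jq check are illustrative assumptions:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # Register the DH-HMAC-CHAP secret and its controller counterpart with the keyring.
    $rpc keyring_file_add_key key1 /tmp/spdk.key-null.NqQ
    $rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.WPP
    # Constrain the initiator to a single digest and DH group for this iteration.
    $rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    # Attach over RDMA; a successful attach means mutual DH-HMAC-CHAP authentication passed.
    $rpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Confirm the controller exists, then detach before the next digest/dhgroup/key combination.
    $rpc bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
    $rpc bdev_nvme_detach_controller nvme0

The "nvme0n1" lines interleaved in the trace are the kernel enumerating the namespace each time the attach succeeds, which is why every iteration ends with the get_controllers check and a detach.
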
00:31:52.307 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.307 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.307 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.307 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.307 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:52.307 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:31:52.307 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.307 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:52.307 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:52.307 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:52.307 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAwNTJlMjY2YWJiZmJlMzdjOGZmNTY2MzllYzA0NTE1ZDY5YWFjNTA0ZWRjNWFmucbEqw==: 00:31:52.307 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: 00:31:52.307 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:52.307 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:52.307 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAwNTJlMjY2YWJiZmJlMzdjOGZmNTY2MzllYzA0NTE1ZDY5YWFjNTA0ZWRjNWFmucbEqw==: 00:31:52.307 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: ]] 00:31:52.307 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: 00:31:52.308 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:31:52.308 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.308 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:52.308 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:52.308 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:52.308 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.308 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:52.308 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.308 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.308 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.308 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.308 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:52.308 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:52.308 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:52.308 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.308 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.308 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:52.308 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:52.308 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:52.308 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:52.308 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:52.308 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:52.308 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.308 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.566 nvme0n1 00:31:52.566 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.566 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.566 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.566 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.566 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.566 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmM0Y2ZhYTg3MWYzNzZiOTJjZWE3MTBlZDVlOThlZWJQpsMC: 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmM0Y2ZhYTg3MWYzNzZiOTJjZWE3MTBlZDVlOThlZWJQpsMC: 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: ]] 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:52.825 14:30:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:52.825 14:30:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.825 14:30:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.084 nvme0n1 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmMyOWVhMGIxMmRlYTIxNGVjZmJjNWViYjE1NzRjMDA3MTdhM2E4YzJmYTVhMTUywArm9A==: 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmMyOWVhMGIxMmRlYTIxNGVjZmJjNWViYjE1NzRjMDA3MTdhM2E4YzJmYTVhMTUywArm9A==: 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: ]] 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:53.084 14:30:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:53.085 
14:30:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:53.085 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:53.085 14:30:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.085 14:30:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.342 nvme0n1 00:31:53.342 14:30:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.342 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.343 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:53.343 14:30:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.343 14:30:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.343 14:30:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.343 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.343 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.343 14:30:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.343 14:30:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.601 14:30:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.601 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.601 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:31:53.601 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.601 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:53.601 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:53.601 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:53.601 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGRjMTMwY2M4OGE0OWNjY2IwYmIxMTljMzFkZWJmNzlmMmNlYjI4NGE0ZWNlOWEzOTc2MGM5OTFlNThmMzliOJnPsSM=: 00:31:53.601 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:53.601 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:53.601 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:53.601 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGRjMTMwY2M4OGE0OWNjY2IwYmIxMTljMzFkZWJmNzlmMmNlYjI4NGE0ZWNlOWEzOTc2MGM5OTFlNThmMzliOJnPsSM=: 00:31:53.601 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:53.601 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:31:53.601 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.601 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:53.601 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:53.601 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:53.601 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.601 14:30:20 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:53.601 14:30:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.601 14:30:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.601 14:30:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.601 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.601 14:30:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:53.601 14:30:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:53.601 14:30:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:53.601 14:30:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.601 14:30:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.601 14:30:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:53.601 14:30:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:53.601 14:30:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:53.601 14:30:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:53.601 14:30:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:53.601 14:30:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:53.601 14:30:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.601 14:30:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.860 nvme0n1 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 
-- # dhgroup=ffdhe4096 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE3NTE0ZDBmZjc3Yjk2MzRiOTZjOGRkZmU1ZGNiMGP2Poh7: 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE3NTE0ZDBmZjc3Yjk2MzRiOTZjOGRkZmU1ZGNiMGP2Poh7: 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: ]] 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:31:53.860 14:30:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.117 nvme0n1 00:31:54.117 14:30:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.117 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:54.117 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:54.117 14:30:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.117 14:30:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.117 14:30:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.117 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:54.117 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:54.117 14:30:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.117 14:30:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.375 14:30:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.375 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:54.375 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:31:54.375 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:54.375 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:54.375 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:54.375 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:54.375 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAwNTJlMjY2YWJiZmJlMzdjOGZmNTY2MzllYzA0NTE1ZDY5YWFjNTA0ZWRjNWFmucbEqw==: 00:31:54.375 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: 00:31:54.375 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:54.375 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:54.375 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAwNTJlMjY2YWJiZmJlMzdjOGZmNTY2MzllYzA0NTE1ZDY5YWFjNTA0ZWRjNWFmucbEqw==: 00:31:54.375 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: ]] 00:31:54.375 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: 00:31:54.375 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:31:54.375 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:54.375 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:54.375 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:54.375 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:54.375 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:54.375 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:54.375 14:30:21 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.375 14:30:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.375 14:30:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.375 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:54.375 14:30:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:54.375 14:30:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:54.375 14:30:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:54.375 14:30:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:54.375 14:30:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:54.375 14:30:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:54.375 14:30:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:54.375 14:30:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:54.375 14:30:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:54.375 14:30:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:54.375 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:54.375 14:30:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.375 14:30:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.633 nvme0n1 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ZmM0Y2ZhYTg3MWYzNzZiOTJjZWE3MTBlZDVlOThlZWJQpsMC: 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmM0Y2ZhYTg3MWYzNzZiOTJjZWE3MTBlZDVlOThlZWJQpsMC: 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: ]] 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:54.633 14:30:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:54.634 14:30:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:54.634 14:30:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:54.634 14:30:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:54.634 14:30:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:54.634 14:30:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.634 14:30:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.203 nvme0n1 00:31:55.203 14:30:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.203 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:31:55.203 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:55.203 14:30:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.203 14:30:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.203 14:30:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.203 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.203 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.203 14:30:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.203 14:30:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.203 14:30:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.203 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:55.203 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:31:55.203 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:55.203 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:55.203 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:55.203 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:55.203 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmMyOWVhMGIxMmRlYTIxNGVjZmJjNWViYjE1NzRjMDA3MTdhM2E4YzJmYTVhMTUywArm9A==: 00:31:55.203 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: 00:31:55.203 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:55.203 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:55.203 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmMyOWVhMGIxMmRlYTIxNGVjZmJjNWViYjE1NzRjMDA3MTdhM2E4YzJmYTVhMTUywArm9A==: 00:31:55.203 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: ]] 00:31:55.203 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: 00:31:55.203 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:31:55.203 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:55.203 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:55.203 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:55.203 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:55.203 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:55.203 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:55.203 14:30:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.203 14:30:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.203 14:30:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.203 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:55.203 14:30:22 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:31:55.204 14:30:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:55.204 14:30:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:55.204 14:30:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.204 14:30:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.204 14:30:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:55.204 14:30:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:55.204 14:30:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:55.204 14:30:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:55.204 14:30:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:55.204 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:55.204 14:30:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.204 14:30:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.771 nvme0n1 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGRjMTMwY2M4OGE0OWNjY2IwYmIxMTljMzFkZWJmNzlmMmNlYjI4NGE0ZWNlOWEzOTc2MGM5OTFlNThmMzliOJnPsSM=: 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:03:OGRjMTMwY2M4OGE0OWNjY2IwYmIxMTljMzFkZWJmNzlmMmNlYjI4NGE0ZWNlOWEzOTc2MGM5OTFlNThmMzliOJnPsSM=: 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.771 14:30:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.030 nvme0n1 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.030 
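Every iteration traced here drives the same four-step RPC cycle against the SPDK target. Condensed into a standalone sketch, assuming rpc.py talks to the default RPC socket and that the named DH-CHAP keys were registered earlier in the test, before this excerpt:

    # restrict the host to one digest/dhgroup pair, then authenticate
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # the attach only yields a controller if DH-CHAP negotiation succeeded
    [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    scripts/rpc.py bdev_nvme_detach_controller nvme0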
14:30:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE3NTE0ZDBmZjc3Yjk2MzRiOTZjOGRkZmU1ZGNiMGP2Poh7: 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE3NTE0ZDBmZjc3Yjk2MzRiOTZjOGRkZmU1ZGNiMGP2Poh7: 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: ]] 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.030 14:30:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.595 nvme0n1 00:31:56.595 14:30:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.595 14:30:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.595 14:30:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.595 14:30:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:56.595 14:30:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.595 14:30:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.853 14:30:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.853 14:30:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.853 14:30:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.853 14:30:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.853 14:30:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.853 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:56.853 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:31:56.853 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:56.853 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:56.853 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:56.853 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:56.853 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAwNTJlMjY2YWJiZmJlMzdjOGZmNTY2MzllYzA0NTE1ZDY5YWFjNTA0ZWRjNWFmucbEqw==: 00:31:56.853 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: 00:31:56.853 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:56.853 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:56.853 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAwNTJlMjY2YWJiZmJlMzdjOGZmNTY2MzllYzA0NTE1ZDY5YWFjNTA0ZWRjNWFmucbEqw==: 00:31:56.853 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: ]] 00:31:56.853 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: 00:31:56.853 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:31:56.853 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:56.853 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:56.853 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:56.853 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:56.853 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:56.853 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:56.853 14:30:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.853 14:30:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.853 14:30:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.853 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:56.853 14:30:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:56.853 14:30:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:56.853 14:30:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:56.853 14:30:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.853 14:30:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.853 14:30:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:56.853 14:30:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:56.853 14:30:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:56.853 14:30:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:56.853 14:30:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:56.853 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:56.853 14:30:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.853 14:30:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.423 nvme0n1 00:31:57.423 14:30:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.423 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.423 14:30:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.423 14:30:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.423 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:57.423 14:30:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.423 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:57.423 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:57.423 14:30:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 
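The get_main_ns_ip sequence traced at nvmf/common.sh@741-755 resolves which address the initiator should dial: it maps the transport to the name of an environment variable, then dereferences it. A reconstruction from the xtrace; the transport variable's name is an assumption, since the trace only shows its expanded value, rdma:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1   # indirect expansion; here NVMF_FIRST_TARGET_IP=192.168.100.8
        echo "${!ip}"
    }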
00:31:57.423 14:30:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.423 14:30:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.423 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:57.423 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:31:57.423 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:57.423 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:57.423 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:57.423 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:57.423 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmM0Y2ZhYTg3MWYzNzZiOTJjZWE3MTBlZDVlOThlZWJQpsMC: 00:31:57.423 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: 00:31:57.423 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:57.423 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:57.423 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmM0Y2ZhYTg3MWYzNzZiOTJjZWE3MTBlZDVlOThlZWJQpsMC: 00:31:57.423 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: ]] 00:31:57.423 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: 00:31:57.423 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:31:57.423 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:57.423 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:57.423 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:57.423 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:57.423 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:57.423 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:57.423 14:30:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.423 14:30:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.423 14:30:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.423 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:57.423 14:30:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:57.423 14:30:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:57.424 14:30:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:57.424 14:30:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.424 14:30:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.424 14:30:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:57.424 14:30:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:57.424 14:30:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:57.424 
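The ckey array assignment traced at host/auth.sh@58 is what makes keyid 4 exercise unidirectional authentication: ${ckeys[keyid]:+...} expands to the extra --dhchap-ctrlr-key arguments only when a controller key is present, and keyid 4 carries an empty one. Minimal illustration, with a placeholder in place of a real secret:

    ckeys=([3]='example-ctrlr-key' [4]='')
    for keyid in 3 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${ckey[*]:-no controller key, one-way auth}"
    done
    # keyid=3 -> --dhchap-ctrlr-key ckey3
    # keyid=4 -> no controller key, one-way auth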
14:30:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:57.424 14:30:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:57.424 14:30:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:57.424 14:30:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.424 14:30:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.358 nvme0n1 00:31:58.358 14:30:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.358 14:30:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.358 14:30:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.358 14:30:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:58.358 14:30:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.358 14:30:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.358 14:30:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.358 14:30:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:58.358 14:30:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.358 14:30:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.358 14:30:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.358 14:30:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:58.358 14:30:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:31:58.358 14:30:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:58.358 14:30:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:58.358 14:30:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:58.358 14:30:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:58.358 14:30:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmMyOWVhMGIxMmRlYTIxNGVjZmJjNWViYjE1NzRjMDA3MTdhM2E4YzJmYTVhMTUywArm9A==: 00:31:58.358 14:30:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: 00:31:58.358 14:30:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:58.358 14:30:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:58.358 14:30:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmMyOWVhMGIxMmRlYTIxNGVjZmJjNWViYjE1NzRjMDA3MTdhM2E4YzJmYTVhMTUywArm9A==: 00:31:58.358 14:30:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: ]] 00:31:58.358 14:30:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: 00:31:58.358 14:30:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:31:58.358 14:30:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:58.358 14:30:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:58.358 14:30:25 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:58.358 14:30:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:58.358 14:30:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:58.358 14:30:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:58.358 14:30:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.358 14:30:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.358 14:30:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.358 14:30:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:58.358 14:30:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:58.359 14:30:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:58.359 14:30:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:58.359 14:30:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.359 14:30:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.359 14:30:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:58.359 14:30:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:58.359 14:30:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:58.359 14:30:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:58.359 14:30:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:58.359 14:30:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:58.359 14:30:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.359 14:30:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.927 nvme0n1 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:31:58.927 14:30:26 
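On the target side, nvmet_auth_set_key mirrors each key into the kernel nvmet configuration: the three echoes traced at host/auth.sh@48-50 in each iteration ('hmac(sha256)', the dhgroup, the key string) correspond naturally to host attributes under configfs, though the redirections themselves do not appear in the xtrace. A sketch assuming the standard kernel nvmet layout and attribute names:

    # target-side equivalent of nvmet_auth_set_key sha256 ffdhe6144 4 (sketch;
    # needs root and an allowed-hosts entry for the host NQN used above)
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"
    echo ffdhe6144 > "$host/dhchap_dhgroup"
    echo 'DHHC-1:03:OGRjMTMwY2M4OGE0OWNjY2IwYmIxMTljMzFkZWJmNzlmMmNlYjI4NGE0ZWNlOWEzOTc2MGM5OTFlNThmMzliOJnPsSM=:' > "$host/dhchap_key"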
nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGRjMTMwY2M4OGE0OWNjY2IwYmIxMTljMzFkZWJmNzlmMmNlYjI4NGE0ZWNlOWEzOTc2MGM5OTFlNThmMzliOJnPsSM=: 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGRjMTMwY2M4OGE0OWNjY2IwYmIxMTljMzFkZWJmNzlmMmNlYjI4NGE0ZWNlOWEzOTc2MGM5OTFlNThmMzliOJnPsSM=: 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.927 14:30:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:31:59.528 nvme0n1 00:31:59.528 14:30:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.528 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.528 14:30:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.528 14:30:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.528 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:59.528 14:30:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.528 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.528 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:59.528 14:30:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.528 14:30:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.528 14:30:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.528 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:59.528 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:59.528 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:31:59.528 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:59.528 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:59.528 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:59.528 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:59.528 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE3NTE0ZDBmZjc3Yjk2MzRiOTZjOGRkZmU1ZGNiMGP2Poh7: 00:31:59.528 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: 00:31:59.528 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:59.528 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:59.528 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE3NTE0ZDBmZjc3Yjk2MzRiOTZjOGRkZmU1ZGNiMGP2Poh7: 00:31:59.528 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: ]] 00:31:59.528 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: 00:31:59.528 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:31:59.528 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:59.528 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:59.528 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:59.528 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:59.529 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:59.529 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:59.529 14:30:26 
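The @101-@104 markers give away the overall shape of the sweep: an outer loop over DH groups and an inner loop over the five key indices, with the target keyed first and the host connection attempted second. Reconstructed skeleton; keys[], ckeys[], and the dhgroups list are populated earlier in the test, outside this excerpt:

    for dhgroup in "${dhgroups[@]}"; do             # ... ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192
        for keyid in "${!keys[@]}"; do              # 0 1 2 3 4
            nvmet_auth_set_key sha256 "$dhgroup" "$keyid"    # program the target
            connect_authenticate sha256 "$dhgroup" "$keyid"  # attach, verify, detach
        done
    done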
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.529 14:30:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.529 14:30:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.529 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:59.529 14:30:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:59.529 14:30:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:59.529 14:30:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:59.529 14:30:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.529 14:30:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.529 14:30:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:31:59.529 14:30:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:31:59.529 14:30:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:31:59.529 14:30:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:31:59.529 14:30:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:31:59.529 14:30:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:59.529 14:30:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.529 14:30:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.910 nvme0n1 00:32:00.910 14:30:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.910 14:30:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:00.910 14:30:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:00.910 14:30:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.910 14:30:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.910 14:30:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.910 14:30:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:00.910 14:30:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:00.910 14:30:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.910 14:30:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.910 14:30:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.910 14:30:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:00.910 14:30:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:00.910 14:30:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:00.910 14:30:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:00.910 14:30:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:00.910 14:30:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:00.910 14:30:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzAwNTJlMjY2YWJiZmJlMzdjOGZmNTY2MzllYzA0NTE1ZDY5YWFjNTA0ZWRjNWFmucbEqw==: 00:32:00.910 14:30:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: 00:32:00.910 14:30:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:00.910 14:30:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:00.910 14:30:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAwNTJlMjY2YWJiZmJlMzdjOGZmNTY2MzllYzA0NTE1ZDY5YWFjNTA0ZWRjNWFmucbEqw==: 00:32:00.910 14:30:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: ]] 00:32:00.910 14:30:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: 00:32:00.910 14:30:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:00.910 14:30:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:00.910 14:30:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:00.910 14:30:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:00.910 14:30:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:00.910 14:30:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:00.910 14:30:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:00.910 14:30:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.910 14:30:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.910 14:30:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.910 14:30:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:00.910 14:30:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:00.910 14:30:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:00.910 14:30:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:00.910 14:30:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:00.910 14:30:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:00.910 14:30:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:00.910 14:30:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:00.910 14:30:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:00.910 14:30:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:00.910 14:30:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:00.910 14:30:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:00.910 14:30:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.910 14:30:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.846 nvme0n1 00:32:01.846 14:30:29 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.846 14:30:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.846 14:30:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.846 14:30:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.846 14:30:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:01.846 14:30:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.846 14:30:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.846 14:30:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.846 14:30:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.846 14:30:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.846 14:30:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.846 14:30:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.846 14:30:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:01.846 14:30:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.846 14:30:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:01.846 14:30:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:01.846 14:30:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:01.846 14:30:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmM0Y2ZhYTg3MWYzNzZiOTJjZWE3MTBlZDVlOThlZWJQpsMC: 00:32:01.846 14:30:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: 00:32:01.846 14:30:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:01.846 14:30:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:01.846 14:30:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmM0Y2ZhYTg3MWYzNzZiOTJjZWE3MTBlZDVlOThlZWJQpsMC: 00:32:01.847 14:30:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: ]] 00:32:01.847 14:30:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: 00:32:01.847 14:30:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:01.847 14:30:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.847 14:30:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:01.847 14:30:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:01.847 14:30:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:01.847 14:30:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:01.847 14:30:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:01.847 14:30:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.847 14:30:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.847 14:30:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.847 14:30:29 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:32:01.847 14:30:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:01.847 14:30:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:01.847 14:30:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:01.847 14:30:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.847 14:30:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.847 14:30:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:01.847 14:30:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:01.847 14:30:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:01.847 14:30:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:01.847 14:30:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:01.847 14:30:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:01.847 14:30:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.847 14:30:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.227 nvme0n1 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmMyOWVhMGIxMmRlYTIxNGVjZmJjNWViYjE1NzRjMDA3MTdhM2E4YzJmYTVhMTUywArm9A==: 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:03.227 14:30:30 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmMyOWVhMGIxMmRlYTIxNGVjZmJjNWViYjE1NzRjMDA3MTdhM2E4YzJmYTVhMTUywArm9A==: 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: ]] 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.227 14:30:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.164 nvme0n1 00:32:04.164 14:30:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.164 14:30:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:04.164 14:30:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.164 14:30:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.164 14:30:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGRjMTMwY2M4OGE0OWNjY2IwYmIxMTljMzFkZWJmNzlmMmNlYjI4NGE0ZWNlOWEzOTc2MGM5OTFlNThmMzliOJnPsSM=: 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGRjMTMwY2M4OGE0OWNjY2IwYmIxMTljMzFkZWJmNzlmMmNlYjI4NGE0ZWNlOWEzOTc2MGM5OTFlNThmMzliOJnPsSM=: 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.165 14:30:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.539 nvme0n1 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE3NTE0ZDBmZjc3Yjk2MzRiOTZjOGRkZmU1ZGNiMGP2Poh7: 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE3NTE0ZDBmZjc3Yjk2MzRiOTZjOGRkZmU1ZGNiMGP2Poh7: 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: ]] 00:32:05.539 
14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.539 nvme0n1 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:05.539 
14:30:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:05.539 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:05.540 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:05.540 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:05.540 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAwNTJlMjY2YWJiZmJlMzdjOGZmNTY2MzllYzA0NTE1ZDY5YWFjNTA0ZWRjNWFmucbEqw==: 00:32:05.540 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: 00:32:05.540 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:05.540 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:05.540 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAwNTJlMjY2YWJiZmJlMzdjOGZmNTY2MzllYzA0NTE1ZDY5YWFjNTA0ZWRjNWFmucbEqw==: 00:32:05.540 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: ]] 00:32:05.540 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: 00:32:05.540 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:32:05.540 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:05.540 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:05.540 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:05.540 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:05.540 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:05.540 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:05.540 14:30:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.540 14:30:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.540 14:30:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.540 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:05.540 14:30:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:05.540 14:30:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:05.540 14:30:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:05.540 14:30:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:05.540 14:30:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:05.540 14:30:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:05.540 
14:30:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:05.540 14:30:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:05.540 14:30:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:05.540 14:30:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:05.540 14:30:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:05.540 14:30:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.540 14:30:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.799 nvme0n1 00:32:05.799 14:30:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.799 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:05.799 14:30:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.799 14:30:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.799 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:05.799 14:30:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.058 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmM0Y2ZhYTg3MWYzNzZiOTJjZWE3MTBlZDVlOThlZWJQpsMC: 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmM0Y2ZhYTg3MWYzNzZiOTJjZWE3MTBlZDVlOThlZWJQpsMC: 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: ]] 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.059 14:30:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.320 nvme0n1 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmMyOWVhMGIxMmRlYTIxNGVjZmJjNWViYjE1NzRjMDA3MTdhM2E4YzJmYTVhMTUywArm9A==: 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmMyOWVhMGIxMmRlYTIxNGVjZmJjNWViYjE1NzRjMDA3MTdhM2E4YzJmYTVhMTUywArm9A==: 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: ]] 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.320 14:30:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.581 nvme0n1 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGRjMTMwY2M4OGE0OWNjY2IwYmIxMTljMzFkZWJmNzlmMmNlYjI4NGE0ZWNlOWEzOTc2MGM5OTFlNThmMzliOJnPsSM=: 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGRjMTMwY2M4OGE0OWNjY2IwYmIxMTljMzFkZWJmNzlmMmNlYjI4NGE0ZWNlOWEzOTc2MGM5OTFlNThmMzliOJnPsSM=: 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:06.582 14:30:33 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.582 14:30:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.843 nvme0n1 00:32:06.843 14:30:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.843 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:06.843 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:06.843 14:30:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.843 14:30:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.843 14:30:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.843 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:06.843 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:06.843 14:30:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.843 14:30:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.843 14:30:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.843 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:06.843 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:06.843 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:32:06.843 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:06.843 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:06.843 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:06.843 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:06.843 14:30:34 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE3NTE0ZDBmZjc3Yjk2MzRiOTZjOGRkZmU1ZGNiMGP2Poh7: 00:32:06.843 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: 00:32:06.843 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:06.843 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:06.843 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE3NTE0ZDBmZjc3Yjk2MzRiOTZjOGRkZmU1ZGNiMGP2Poh7: 00:32:06.843 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: ]] 00:32:06.843 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: 00:32:06.843 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:32:06.843 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:06.843 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:06.843 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:06.843 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:06.843 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:06.843 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:06.843 14:30:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.843 14:30:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.102 14:30:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.102 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:07.102 14:30:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:07.102 14:30:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:07.102 14:30:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:07.102 14:30:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.102 14:30:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.102 14:30:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:07.102 14:30:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:07.102 14:30:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:07.102 14:30:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:07.102 14:30:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:07.102 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:07.102 14:30:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.103 14:30:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.362 nvme0n1 
00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAwNTJlMjY2YWJiZmJlMzdjOGZmNTY2MzllYzA0NTE1ZDY5YWFjNTA0ZWRjNWFmucbEqw==: 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAwNTJlMjY2YWJiZmJlMzdjOGZmNTY2MzllYzA0NTE1ZDY5YWFjNTA0ZWRjNWFmucbEqw==: 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: ]] 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.362 14:30:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.624 nvme0n1 00:32:07.624 14:30:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.624 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:07.624 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:07.624 14:30:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.624 14:30:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.624 14:30:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.624 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:07.624 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:07.624 14:30:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.624 14:30:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.624 14:30:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.624 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:07.624 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:32:07.624 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:07.624 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:07.624 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:07.624 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:07.624 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmM0Y2ZhYTg3MWYzNzZiOTJjZWE3MTBlZDVlOThlZWJQpsMC: 00:32:07.624 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: 00:32:07.624 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:07.624 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:07.624 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmM0Y2ZhYTg3MWYzNzZiOTJjZWE3MTBlZDVlOThlZWJQpsMC: 00:32:07.624 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: ]] 00:32:07.624 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: 00:32:07.624 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:32:07.625 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:07.625 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:07.625 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:07.625 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:07.625 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:07.625 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:07.625 14:30:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.625 14:30:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.625 14:30:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.625 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:07.625 14:30:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:07.625 14:30:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:07.625 14:30:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:07.625 14:30:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.625 14:30:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.625 14:30:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:07.625 14:30:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:07.625 14:30:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:07.625 14:30:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:07.625 14:30:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:07.625 14:30:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:07.625 14:30:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.625 14:30:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.885 nvme0n1 00:32:07.885 14:30:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.885 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:07.885 14:30:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.885 
14:30:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.885 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:07.885 14:30:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.144 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmMyOWVhMGIxMmRlYTIxNGVjZmJjNWViYjE1NzRjMDA3MTdhM2E4YzJmYTVhMTUywArm9A==: 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmMyOWVhMGIxMmRlYTIxNGVjZmJjNWViYjE1NzRjMDA3MTdhM2E4YzJmYTVhMTUywArm9A==: 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: ]] 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:08.145 
14:30:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.145 14:30:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.404 nvme0n1 00:32:08.404 14:30:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.404 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:08.404 14:30:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.404 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:08.404 14:30:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.404 14:30:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.404 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:08.404 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:08.404 14:30:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.404 14:30:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.404 14:30:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.404 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:08.404 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:08.404 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:08.404 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:08.404 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:08.404 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:08.404 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGRjMTMwY2M4OGE0OWNjY2IwYmIxMTljMzFkZWJmNzlmMmNlYjI4NGE0ZWNlOWEzOTc2MGM5OTFlNThmMzliOJnPsSM=: 00:32:08.404 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:08.404 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:08.404 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:08.404 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGRjMTMwY2M4OGE0OWNjY2IwYmIxMTljMzFkZWJmNzlmMmNlYjI4NGE0ZWNlOWEzOTc2MGM5OTFlNThmMzliOJnPsSM=: 00:32:08.404 14:30:35 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:08.404 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:32:08.404 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:08.404 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:08.404 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:08.404 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:08.404 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:08.404 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:08.404 14:30:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.404 14:30:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.404 14:30:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.404 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:08.404 14:30:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:08.404 14:30:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:08.404 14:30:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:08.404 14:30:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:08.405 14:30:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:08.405 14:30:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:08.405 14:30:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:08.405 14:30:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:08.405 14:30:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:08.405 14:30:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:08.405 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:08.405 14:30:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.405 14:30:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.663 nvme0n1 00:32:08.663 14:30:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.663 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:08.663 14:30:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.663 14:30:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.663 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:08.663 14:30:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.663 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:08.663 14:30:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:08.663 14:30:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.663 14:30:35 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:08.663 14:30:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.663 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:08.663 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:08.663 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:08.663 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:08.663 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:08.663 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:08.663 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:08.663 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE3NTE0ZDBmZjc3Yjk2MzRiOTZjOGRkZmU1ZGNiMGP2Poh7: 00:32:08.663 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: 00:32:08.664 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:08.664 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:08.664 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE3NTE0ZDBmZjc3Yjk2MzRiOTZjOGRkZmU1ZGNiMGP2Poh7: 00:32:08.664 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: ]] 00:32:08.664 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: 00:32:08.664 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:32:08.664 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:08.664 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:08.664 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:08.664 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:08.664 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:08.664 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:08.664 14:30:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.664 14:30:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.922 14:30:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.922 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:08.922 14:30:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:08.922 14:30:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:08.922 14:30:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:08.922 14:30:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:08.922 14:30:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:08.922 14:30:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:08.922 14:30:36 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:08.922 14:30:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:08.922 14:30:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:08.922 14:30:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:08.922 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:08.922 14:30:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.922 14:30:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.180 nvme0n1 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAwNTJlMjY2YWJiZmJlMzdjOGZmNTY2MzllYzA0NTE1ZDY5YWFjNTA0ZWRjNWFmucbEqw==: 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAwNTJlMjY2YWJiZmJlMzdjOGZmNTY2MzllYzA0NTE1ZDY5YWFjNTA0ZWRjNWFmucbEqw==: 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: ]] 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha384 ffdhe4096 1 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.181 14:30:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.440 nvme0n1 00:32:09.440 14:30:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.699 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:09.699 14:30:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.699 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:09.699 14:30:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.699 14:30:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.699 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.699 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:09.699 14:30:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.699 14:30:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.699 14:30:36 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.699 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:09.699 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:09.699 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:09.699 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:09.699 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:09.699 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:09.699 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmM0Y2ZhYTg3MWYzNzZiOTJjZWE3MTBlZDVlOThlZWJQpsMC: 00:32:09.699 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: 00:32:09.699 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:09.699 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:09.699 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmM0Y2ZhYTg3MWYzNzZiOTJjZWE3MTBlZDVlOThlZWJQpsMC: 00:32:09.699 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: ]] 00:32:09.699 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: 00:32:09.699 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:32:09.699 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:09.699 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:09.699 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:09.699 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:09.699 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:09.699 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:09.699 14:30:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.699 14:30:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.699 14:30:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.700 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:09.700 14:30:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:09.700 14:30:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:09.700 14:30:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:09.700 14:30:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:09.700 14:30:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:09.700 14:30:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:09.700 14:30:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:09.700 14:30:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:09.700 14:30:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:09.700 14:30:36 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:09.700 14:30:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:09.700 14:30:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.700 14:30:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.958 nvme0n1 00:32:09.958 14:30:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.958 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:09.958 14:30:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.958 14:30:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.958 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:09.958 14:30:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.958 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.958 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:09.958 14:30:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.958 14:30:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.218 14:30:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.218 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.218 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:32:10.218 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.218 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:10.218 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:10.218 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:10.218 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmMyOWVhMGIxMmRlYTIxNGVjZmJjNWViYjE1NzRjMDA3MTdhM2E4YzJmYTVhMTUywArm9A==: 00:32:10.218 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: 00:32:10.218 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:10.218 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:10.218 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmMyOWVhMGIxMmRlYTIxNGVjZmJjNWViYjE1NzRjMDA3MTdhM2E4YzJmYTVhMTUywArm9A==: 00:32:10.218 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: ]] 00:32:10.218 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: 00:32:10.218 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:32:10.218 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.218 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:10.218 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:10.218 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:10.218 
14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.218 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:10.219 14:30:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.219 14:30:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.219 14:30:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.219 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.219 14:30:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:10.219 14:30:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:10.219 14:30:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:10.219 14:30:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.219 14:30:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.219 14:30:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:10.219 14:30:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:10.219 14:30:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:10.219 14:30:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:10.219 14:30:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:10.219 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:10.219 14:30:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.219 14:30:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.477 nvme0n1 00:32:10.477 14:30:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.477 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.477 14:30:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.477 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:10.477 14:30:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.477 14:30:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGRjMTMwY2M4OGE0OWNjY2IwYmIxMTljMzFkZWJmNzlmMmNlYjI4NGE0ZWNlOWEzOTc2MGM5OTFlNThmMzliOJnPsSM=: 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGRjMTMwY2M4OGE0OWNjY2IwYmIxMTljMzFkZWJmNzlmMmNlYjI4NGE0ZWNlOWEzOTc2MGM5OTFlNThmMzliOJnPsSM=: 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.736 14:30:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.994 nvme0n1 00:32:10.994 14:30:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
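Note how the key-4 rounds in this trace differ from keys 0-3: key 4 has no paired controller key, so the host/auth.sh@51 check reduces to [[ -z '' ]] and the attach omits --dhchap-ctrlr-key, exercising unidirectional (host-only) DH-HMAC-CHAP authentication. The host/auth.sh@58 line shows the bash idiom that makes the flag optional; a minimal sketch of that pattern, assuming the rpc_cmd helper and the ckeys array provided by the surrounding harness:

  # keyid 4 is the one key in this suite without a paired controller key,
  # i.e. ckeys[4] is empty.
  keyid=4
  # Expands to "--dhchap-ctrlr-key ckeyN" only when ckeys[keyid] is set and
  # non-empty; for keyid 4 the array stays empty and the flag is dropped.
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 \
      -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"

With the quoted "${ckey[@]}" expansion, an empty array contributes zero arguments, so the same attach line serves both the bidirectional and the unidirectional rounds.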
00:32:10.994 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.994 14:30:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.994 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:10.994 14:30:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.994 14:30:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.994 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE3NTE0ZDBmZjc3Yjk2MzRiOTZjOGRkZmU1ZGNiMGP2Poh7: 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE3NTE0ZDBmZjc3Yjk2MzRiOTZjOGRkZmU1ZGNiMGP2Poh7: 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: ]] 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.995 14:30:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.561 nvme0n1 00:32:11.561 14:30:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.561 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.561 14:30:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.561 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.561 14:30:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.561 14:30:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.820 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.820 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.820 14:30:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.820 14:30:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.820 14:30:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.820 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.820 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:32:11.820 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.820 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:11.820 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:11.820 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:11.820 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAwNTJlMjY2YWJiZmJlMzdjOGZmNTY2MzllYzA0NTE1ZDY5YWFjNTA0ZWRjNWFmucbEqw==: 00:32:11.820 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: 00:32:11.820 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:11.820 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:11.821 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAwNTJlMjY2YWJiZmJlMzdjOGZmNTY2MzllYzA0NTE1ZDY5YWFjNTA0ZWRjNWFmucbEqw==: 00:32:11.821 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: ]] 00:32:11.821 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: 00:32:11.821 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:32:11.821 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.821 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:11.821 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:11.821 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:11.821 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.821 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:11.821 14:30:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.821 14:30:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.821 14:30:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.821 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.821 14:30:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:11.821 14:30:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:11.821 14:30:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:11.821 14:30:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.821 14:30:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.821 14:30:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:11.821 14:30:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:11.821 14:30:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:11.821 14:30:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:11.821 14:30:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:11.821 14:30:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:11.821 14:30:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.821 14:30:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.391 nvme0n1 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.391 14:30:39 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmM0Y2ZhYTg3MWYzNzZiOTJjZWE3MTBlZDVlOThlZWJQpsMC: 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmM0Y2ZhYTg3MWYzNzZiOTJjZWE3MTBlZDVlOThlZWJQpsMC: 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: ]] 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.391 14:30:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.366 nvme0n1 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmMyOWVhMGIxMmRlYTIxNGVjZmJjNWViYjE1NzRjMDA3MTdhM2E4YzJmYTVhMTUywArm9A==: 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZmMyOWVhMGIxMmRlYTIxNGVjZmJjNWViYjE1NzRjMDA3MTdhM2E4YzJmYTVhMTUywArm9A==: 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: ]] 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.366 14:30:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.955 nvme0n1 00:32:13.955 14:30:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.955 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.955 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:13.955 14:30:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.955 14:30:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.955 14:30:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.955 14:30:41 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:13.955 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:13.955 14:30:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.955 14:30:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.955 14:30:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.955 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:13.955 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:32:13.955 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.955 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:13.955 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:13.955 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:13.955 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGRjMTMwY2M4OGE0OWNjY2IwYmIxMTljMzFkZWJmNzlmMmNlYjI4NGE0ZWNlOWEzOTc2MGM5OTFlNThmMzliOJnPsSM=: 00:32:13.955 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:13.955 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:13.955 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:13.955 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGRjMTMwY2M4OGE0OWNjY2IwYmIxMTljMzFkZWJmNzlmMmNlYjI4NGE0ZWNlOWEzOTc2MGM5OTFlNThmMzliOJnPsSM=: 00:32:13.955 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:13.955 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:32:13.955 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:13.955 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:13.955 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:13.955 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:13.955 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:13.955 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:13.955 14:30:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.955 14:30:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.955 14:30:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.956 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:13.956 14:30:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:13.956 14:30:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:13.956 14:30:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:13.956 14:30:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.956 14:30:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.956 14:30:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:13.956 14:30:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:32:13.956 14:30:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:13.956 14:30:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:13.956 14:30:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:13.956 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:13.956 14:30:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.956 14:30:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.522 nvme0n1 00:32:14.522 14:30:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.522 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:14.522 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:14.522 14:30:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.522 14:30:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.522 14:30:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.522 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.522 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:14.522 14:30:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.522 14:30:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.782 14:30:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.782 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:14.782 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:14.782 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:32:14.782 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:14.782 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:14.782 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:14.782 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:14.782 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE3NTE0ZDBmZjc3Yjk2MzRiOTZjOGRkZmU1ZGNiMGP2Poh7: 00:32:14.782 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: 00:32:14.782 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:14.782 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:14.782 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE3NTE0ZDBmZjc3Yjk2MzRiOTZjOGRkZmU1ZGNiMGP2Poh7: 00:32:14.782 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: ]] 00:32:14.782 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: 00:32:14.782 14:30:41 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:32:14.782 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:14.782 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:14.782 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:14.782 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:14.782 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:14.783 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:14.783 14:30:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.783 14:30:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.783 14:30:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.783 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:14.783 14:30:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:14.783 14:30:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:14.783 14:30:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:14.783 14:30:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:14.783 14:30:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:14.783 14:30:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:14.783 14:30:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:14.783 14:30:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:14.783 14:30:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:14.783 14:30:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:14.783 14:30:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:14.783 14:30:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.783 14:30:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.717 nvme0n1 00:32:15.717 14:30:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.717 14:30:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:15.717 14:30:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:15.717 14:30:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.717 14:30:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAwNTJlMjY2YWJiZmJlMzdjOGZmNTY2MzllYzA0NTE1ZDY5YWFjNTA0ZWRjNWFmucbEqw==: 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAwNTJlMjY2YWJiZmJlMzdjOGZmNTY2MzllYzA0NTE1ZDY5YWFjNTA0ZWRjNWFmucbEqw==: 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: ]] 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:15.975 14:30:43 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.975 14:30:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.909 nvme0n1 00:32:16.909 14:30:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.909 14:30:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:16.909 14:30:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:16.909 14:30:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.909 14:30:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmM0Y2ZhYTg3MWYzNzZiOTJjZWE3MTBlZDVlOThlZWJQpsMC: 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmM0Y2ZhYTg3MWYzNzZiOTJjZWE3MTBlZDVlOThlZWJQpsMC: 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: ]] 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:17.167 
14:30:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.167 14:30:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.099 nvme0n1 00:32:18.099 14:30:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.099 14:30:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:18.099 14:30:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:18.099 14:30:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.099 14:30:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.099 14:30:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.099 14:30:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.099 14:30:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:18.099 14:30:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.099 14:30:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.358 14:30:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.358 14:30:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:18.358 14:30:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:32:18.358 14:30:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup 
keyid key ckey 00:32:18.358 14:30:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:18.358 14:30:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:18.358 14:30:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:18.358 14:30:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmMyOWVhMGIxMmRlYTIxNGVjZmJjNWViYjE1NzRjMDA3MTdhM2E4YzJmYTVhMTUywArm9A==: 00:32:18.358 14:30:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: 00:32:18.358 14:30:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:18.358 14:30:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:18.359 14:30:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmMyOWVhMGIxMmRlYTIxNGVjZmJjNWViYjE1NzRjMDA3MTdhM2E4YzJmYTVhMTUywArm9A==: 00:32:18.359 14:30:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: ]] 00:32:18.359 14:30:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: 00:32:18.359 14:30:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:32:18.359 14:30:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:18.359 14:30:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:18.359 14:30:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:18.359 14:30:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:18.359 14:30:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:18.359 14:30:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:18.359 14:30:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.359 14:30:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.359 14:30:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.359 14:30:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:18.359 14:30:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:18.359 14:30:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:18.359 14:30:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:18.359 14:30:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.359 14:30:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.359 14:30:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:18.359 14:30:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:18.359 14:30:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:18.359 14:30:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:18.359 14:30:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:18.359 14:30:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:18.359 
14:30:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.359 14:30:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.294 nvme0n1 00:32:19.294 14:30:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.294 14:30:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:19.294 14:30:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.294 14:30:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.294 14:30:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:19.294 14:30:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.294 14:30:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.294 14:30:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:19.294 14:30:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.294 14:30:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.294 14:30:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.294 14:30:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:19.294 14:30:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:32:19.294 14:30:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:19.294 14:30:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:19.294 14:30:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:19.294 14:30:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:19.294 14:30:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGRjMTMwY2M4OGE0OWNjY2IwYmIxMTljMzFkZWJmNzlmMmNlYjI4NGE0ZWNlOWEzOTc2MGM5OTFlNThmMzliOJnPsSM=: 00:32:19.294 14:30:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:19.294 14:30:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:19.294 14:30:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:19.295 14:30:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGRjMTMwY2M4OGE0OWNjY2IwYmIxMTljMzFkZWJmNzlmMmNlYjI4NGE0ZWNlOWEzOTc2MGM5OTFlNThmMzliOJnPsSM=: 00:32:19.295 14:30:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:19.295 14:30:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:32:19.295 14:30:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:19.295 14:30:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:19.295 14:30:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:19.295 14:30:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:19.295 14:30:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:19.295 14:30:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:19.295 14:30:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.295 14:30:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.295 14:30:46 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.295 14:30:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:19.295 14:30:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:19.295 14:30:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:19.295 14:30:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:19.295 14:30:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:19.295 14:30:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:19.295 14:30:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:19.295 14:30:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:19.295 14:30:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:19.295 14:30:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:19.295 14:30:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:19.295 14:30:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:19.295 14:30:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.295 14:30:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.690 nvme0n1 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE3NTE0ZDBmZjc3Yjk2MzRiOTZjOGRkZmU1ZGNiMGP2Poh7: 
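[The trace above has just wrapped from the sha384/ffdhe8192 pass into the sha512/ffdhe2048 pass via auth.sh@100-102. A minimal sketch of the sweep those xtrace lines are walking, assuming array contents implied by this run — sha384 and sha512 and the five ffdhe groups are visible in the log; any digests or dhgroups iterated before this excerpt are not:]

    # Reconstruction of the auth.sh@100-104 sweep seen in the xtrace.
    # digests/dhgroups contents are an assumption beyond what this
    # excerpt shows; keyids 0..4 come from the keys array indices.
    digests=(sha384 sha512)
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    for digest in "${digests[@]}"; do            # auth.sh@100
        for dhgroup in "${dhgroups[@]}"; do      # auth.sh@101
            for keyid in "${!keys[@]}"; do       # auth.sh@102
                # Target side: install key/ckey for this combination.
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # auth.sh@103
                # Host side: set options, attach, verify, detach.
                connect_authenticate "$digest" "$dhgroup" "$keyid" # auth.sh@104
            done
        done
    done

[Each iteration ends with the bdev_nvme_get_controllers / jq check for nvme0 and a bdev_nvme_detach_controller, which is the "nvme0n1 ... [[ nvme0 == \n\v\m\e\0 ]]" pattern repeating through this log.]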
00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE3NTE0ZDBmZjc3Yjk2MzRiOTZjOGRkZmU1ZGNiMGP2Poh7: 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: ]] 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.690 14:30:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.690 nvme0n1 00:32:20.690 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.690 14:30:48 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.690 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.690 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.690 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.690 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.690 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.690 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.690 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.690 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.951 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.951 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:20.951 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:32:20.951 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.951 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:20.951 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:20.951 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:20.951 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAwNTJlMjY2YWJiZmJlMzdjOGZmNTY2MzllYzA0NTE1ZDY5YWFjNTA0ZWRjNWFmucbEqw==: 00:32:20.951 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: 00:32:20.951 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:20.951 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:20.951 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAwNTJlMjY2YWJiZmJlMzdjOGZmNTY2MzllYzA0NTE1ZDY5YWFjNTA0ZWRjNWFmucbEqw==: 00:32:20.951 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: ]] 00:32:20.951 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: 00:32:20.951 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:32:20.951 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.951 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:20.951 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:20.951 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:20.951 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.951 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:20.951 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.951 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.951 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
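[The auth.sh@58 line traced above and below uses a bash array idiom worth calling out: the controller key is optional, and for keyid 4 in this run ckeys[4] is empty (hence the "[[ -z '' ]]" checks at auth.sh@51). A sketch of how that expansion drops the flag cleanly, using only the commands visible in this trace:]

    # If ckeys[keyid] is empty or unset, the :+ expansion yields nothing,
    # so ckey becomes an empty array; otherwise it holds two words.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    # "${ckey[@]}" expands to zero words when the array is empty, so the
    # attach gets no --dhchap-ctrlr-key argument at all (keyid 4 here),
    # rather than an empty string the RPC would reject.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"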
00:32:20.951 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.951 14:30:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:20.951 14:30:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.951 14:30:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.951 14:30:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.951 14:30:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.951 14:30:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:20.951 14:30:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:20.951 14:30:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:20.951 14:30:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:20.951 14:30:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:20.951 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:20.951 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.951 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.211 nvme0n1 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmM0Y2ZhYTg3MWYzNzZiOTJjZWE3MTBlZDVlOThlZWJQpsMC: 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
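[The nvmf/common.sh@741-755 lines repeated throughout this log are the get_main_ns_ip helper resolving the attach address. A hedged reconstruction from the xtrace — the indirect-expansion step between @748 (ip=NVMF_FIRST_TARGET_IP) and @750 ([[ -z 192.168.100.8 ]]) is an assumption, since the trace shows only its input and output:]

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        # Map transport to the name of the env var holding the address.
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # @747: both the transport and its candidate name must be set.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # @748: ip holds a var *name*
        # ${!ip} dereferences that name; for rdma here it resolves to
        # 192.168.100.8, matching the @750 check and @755 echo.
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }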
00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmM0Y2ZhYTg3MWYzNzZiOTJjZWE3MTBlZDVlOThlZWJQpsMC: 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: ]] 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.211 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.469 nvme0n1 00:32:21.469 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.469 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.469 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.469 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.469 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:32:21.469 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.469 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.469 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.469 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.469 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.469 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.469 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.469 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:32:21.469 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.469 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:21.469 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:21.469 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:21.469 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmMyOWVhMGIxMmRlYTIxNGVjZmJjNWViYjE1NzRjMDA3MTdhM2E4YzJmYTVhMTUywArm9A==: 00:32:21.469 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: 00:32:21.469 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:21.469 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:21.469 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmMyOWVhMGIxMmRlYTIxNGVjZmJjNWViYjE1NzRjMDA3MTdhM2E4YzJmYTVhMTUywArm9A==: 00:32:21.469 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: ]] 00:32:21.469 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: 00:32:21.469 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:32:21.469 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.469 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:21.469 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:21.469 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:21.469 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.469 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:21.469 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.469 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.470 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.470 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.470 14:30:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.470 14:30:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.470 14:30:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.470 14:30:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.470 14:30:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.470 14:30:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:21.470 14:30:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:21.470 14:30:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:21.470 14:30:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:21.470 14:30:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:21.470 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:21.470 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.470 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.727 nvme0n1 00:32:21.727 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.727 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.727 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.727 14:30:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.727 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.727 14:30:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.727 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.727 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.727 14:30:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.727 14:30:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.728 14:30:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.728 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.728 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:32:21.728 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.728 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:21.728 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:21.728 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:21.728 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGRjMTMwY2M4OGE0OWNjY2IwYmIxMTljMzFkZWJmNzlmMmNlYjI4NGE0ZWNlOWEzOTc2MGM5OTFlNThmMzliOJnPsSM=: 00:32:21.728 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:21.728 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:21.728 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:21.728 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGRjMTMwY2M4OGE0OWNjY2IwYmIxMTljMzFkZWJmNzlmMmNlYjI4NGE0ZWNlOWEzOTc2MGM5OTFlNThmMzliOJnPsSM=: 00:32:21.728 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:21.728 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 
ffdhe2048 4 00:32:21.728 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.728 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:21.728 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:21.728 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:21.728 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.728 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:21.728 14:30:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.728 14:30:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.728 14:30:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.728 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.728 14:30:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.728 14:30:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.728 14:30:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.728 14:30:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.728 14:30:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.728 14:30:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:21.728 14:30:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:21.728 14:30:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:21.728 14:30:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:21.728 14:30:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:21.728 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:21.728 14:30:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.728 14:30:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.986 nvme0n1 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.986 14:30:49 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE3NTE0ZDBmZjc3Yjk2MzRiOTZjOGRkZmU1ZGNiMGP2Poh7: 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE3NTE0ZDBmZjc3Yjk2MzRiOTZjOGRkZmU1ZGNiMGP2Poh7: 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: ]] 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_FIRST_TARGET_IP 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.986 14:30:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.244 nvme0n1 00:32:22.244 14:30:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.244 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.244 14:30:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.244 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.244 14:30:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAwNTJlMjY2YWJiZmJlMzdjOGZmNTY2MzllYzA0NTE1ZDY5YWFjNTA0ZWRjNWFmucbEqw==: 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAwNTJlMjY2YWJiZmJlMzdjOGZmNTY2MzllYzA0NTE1ZDY5YWFjNTA0ZWRjNWFmucbEqw==: 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: ]] 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.503 14:30:49 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:22.503 14:30:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:22.504 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:22.504 14:30:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.504 14:30:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.762 nvme0n1 00:32:22.762 14:30:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.762 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.762 14:30:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.762 14:30:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.762 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.762 14:30:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.762 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.762 14:30:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.762 14:30:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.762 14:30:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.762 14:30:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.762 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.762 14:30:50 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:32:22.762 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.762 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:22.762 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:22.762 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:22.762 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmM0Y2ZhYTg3MWYzNzZiOTJjZWE3MTBlZDVlOThlZWJQpsMC: 00:32:22.762 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: 00:32:22.762 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:22.762 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:22.762 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmM0Y2ZhYTg3MWYzNzZiOTJjZWE3MTBlZDVlOThlZWJQpsMC: 00:32:22.762 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: ]] 00:32:22.762 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: 00:32:22.762 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:32:22.762 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.762 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:22.762 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:22.762 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:22.762 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.762 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:22.762 14:30:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.762 14:30:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.762 14:30:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.762 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.762 14:30:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:22.762 14:30:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:22.762 14:30:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:22.762 14:30:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.762 14:30:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.762 14:30:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:22.762 14:30:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:22.762 14:30:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:22.762 14:30:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:22.762 14:30:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:22.762 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:22.762 14:30:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.762 14:30:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.021 nvme0n1 00:32:23.021 14:30:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.021 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.021 14:30:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.021 14:30:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.021 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.021 14:30:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.021 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.021 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.021 14:30:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.021 14:30:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.281 14:30:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.281 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.281 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:32:23.281 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.281 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:23.281 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:23.281 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:23.281 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmMyOWVhMGIxMmRlYTIxNGVjZmJjNWViYjE1NzRjMDA3MTdhM2E4YzJmYTVhMTUywArm9A==: 00:32:23.281 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: 00:32:23.281 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:23.281 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:23.281 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmMyOWVhMGIxMmRlYTIxNGVjZmJjNWViYjE1NzRjMDA3MTdhM2E4YzJmYTVhMTUywArm9A==: 00:32:23.281 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: ]] 00:32:23.281 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: 00:32:23.281 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:32:23.281 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.281 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:23.281 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:23.281 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:23.281 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.281 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 
-- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:23.281 14:30:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.281 14:30:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.281 14:30:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.281 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.281 14:30:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:23.281 14:30:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:23.281 14:30:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:23.281 14:30:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.281 14:30:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.281 14:30:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:23.281 14:30:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:23.281 14:30:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:23.281 14:30:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:23.281 14:30:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:23.281 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:23.281 14:30:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.281 14:30:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.540 nvme0n1 00:32:23.540 14:30:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.540 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.540 14:30:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.540 14:30:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.540 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.540 14:30:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # 
keyid=4 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGRjMTMwY2M4OGE0OWNjY2IwYmIxMTljMzFkZWJmNzlmMmNlYjI4NGE0ZWNlOWEzOTc2MGM5OTFlNThmMzliOJnPsSM=: 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGRjMTMwY2M4OGE0OWNjY2IwYmIxMTljMzFkZWJmNzlmMmNlYjI4NGE0ZWNlOWEzOTc2MGM5OTFlNThmMzliOJnPsSM=: 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.541 14:30:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.800 nvme0n1 00:32:23.800 14:30:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.800 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.800 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
00:32:23.800 14:30:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.800 14:30:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.800 14:30:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.800 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.800 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.800 14:30:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.800 14:30:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.800 14:30:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.800 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:23.800 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.800 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:32:23.800 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.800 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:23.800 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:23.800 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:23.800 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE3NTE0ZDBmZjc3Yjk2MzRiOTZjOGRkZmU1ZGNiMGP2Poh7: 00:32:23.801 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: 00:32:23.801 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:23.801 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:23.801 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE3NTE0ZDBmZjc3Yjk2MzRiOTZjOGRkZmU1ZGNiMGP2Poh7: 00:32:23.801 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: ]] 00:32:23.801 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: 00:32:23.801 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:32:23.801 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.801 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:23.801 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:23.801 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:23.801 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.801 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:23.801 14:30:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.801 14:30:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.061 14:30:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.061 14:30:51 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:32:24.061 14:30:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:24.061 14:30:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:24.061 14:30:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:24.061 14:30:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.061 14:30:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.061 14:30:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:24.061 14:30:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:24.061 14:30:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:24.061 14:30:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:24.061 14:30:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:24.061 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:24.061 14:30:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.061 14:30:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.320 nvme0n1 00:32:24.320 14:30:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.320 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.320 14:30:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.320 14:30:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.320 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:24.320 14:30:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.320 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.320 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.320 14:30:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.320 14:30:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.320 14:30:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.320 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:24.320 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:32:24.320 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:24.320 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:24.320 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:24.321 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:24.321 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAwNTJlMjY2YWJiZmJlMzdjOGZmNTY2MzllYzA0NTE1ZDY5YWFjNTA0ZWRjNWFmucbEqw==: 00:32:24.321 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: 00:32:24.321 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
00:32:24.321 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:24.321 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAwNTJlMjY2YWJiZmJlMzdjOGZmNTY2MzllYzA0NTE1ZDY5YWFjNTA0ZWRjNWFmucbEqw==: 00:32:24.321 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: ]] 00:32:24.321 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: 00:32:24.321 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:32:24.321 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:24.321 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:24.321 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:24.321 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:24.321 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:24.321 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:24.321 14:30:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.321 14:30:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.321 14:30:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.321 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:24.321 14:30:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:24.321 14:30:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:24.321 14:30:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:24.321 14:30:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.321 14:30:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.321 14:30:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:24.321 14:30:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:24.321 14:30:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:24.321 14:30:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:24.321 14:30:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:24.321 14:30:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:24.321 14:30:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.321 14:30:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.889 nvme0n1 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.889 
14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmM0Y2ZhYTg3MWYzNzZiOTJjZWE3MTBlZDVlOThlZWJQpsMC: 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmM0Y2ZhYTg3MWYzNzZiOTJjZWE3MTBlZDVlOThlZWJQpsMC: 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: ]] 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:24.889 14:30:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:24.890 14:30:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:24.890 14:30:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:24.890 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:24.890 14:30:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.890 14:30:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.457 nvme0n1 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmMyOWVhMGIxMmRlYTIxNGVjZmJjNWViYjE1NzRjMDA3MTdhM2E4YzJmYTVhMTUywArm9A==: 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmMyOWVhMGIxMmRlYTIxNGVjZmJjNWViYjE1NzRjMDA3MTdhM2E4YzJmYTVhMTUywArm9A==: 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: ]] 
00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.457 14:30:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.715 nvme0n1 00:32:25.715 14:30:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.715 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.715 14:30:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.715 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.715 14:30:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGRjMTMwY2M4OGE0OWNjY2IwYmIxMTljMzFkZWJmNzlmMmNlYjI4NGE0ZWNlOWEzOTc2MGM5OTFlNThmMzliOJnPsSM=: 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGRjMTMwY2M4OGE0OWNjY2IwYmIxMTljMzFkZWJmNzlmMmNlYjI4NGE0ZWNlOWEzOTc2MGM5OTFlNThmMzliOJnPsSM=: 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:25.973 14:30:53 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.973 14:30:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.230 nvme0n1 00:32:26.230 14:30:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.230 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:26.230 14:30:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE3NTE0ZDBmZjc3Yjk2MzRiOTZjOGRkZmU1ZGNiMGP2Poh7: 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE3NTE0ZDBmZjc3Yjk2MzRiOTZjOGRkZmU1ZGNiMGP2Poh7: 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: ]] 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.231 14:30:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.214 nvme0n1 00:32:27.214 14:30:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.214 14:30:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.214 14:30:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.214 14:30:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.214 14:30:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.214 14:30:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.214 14:30:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.214 14:30:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.214 14:30:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.214 14:30:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.214 14:30:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.214 14:30:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.214 14:30:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 1 00:32:27.214 14:30:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.214 14:30:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:27.214 14:30:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:27.214 14:30:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:27.214 14:30:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAwNTJlMjY2YWJiZmJlMzdjOGZmNTY2MzllYzA0NTE1ZDY5YWFjNTA0ZWRjNWFmucbEqw==: 00:32:27.214 14:30:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: 00:32:27.214 14:30:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:27.214 14:30:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:27.214 14:30:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAwNTJlMjY2YWJiZmJlMzdjOGZmNTY2MzllYzA0NTE1ZDY5YWFjNTA0ZWRjNWFmucbEqw==: 00:32:27.214 14:30:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: ]] 00:32:27.215 14:30:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: 00:32:27.215 14:30:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:32:27.215 14:30:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.215 14:30:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:27.215 14:30:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:27.215 14:30:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:27.215 14:30:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.215 14:30:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:27.215 14:30:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.215 14:30:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.215 14:30:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.215 14:30:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.215 14:30:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:27.215 14:30:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:27.215 14:30:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:27.215 14:30:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.215 14:30:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.215 14:30:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:27.215 14:30:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:27.215 14:30:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:27.215 14:30:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:27.215 14:30:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:27.215 14:30:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:27.215 14:30:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.215 14:30:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.798 nvme0n1 00:32:27.798 14:30:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.798 14:30:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.798 14:30:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.799 14:30:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.799 14:30:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmM0Y2ZhYTg3MWYzNzZiOTJjZWE3MTBlZDVlOThlZWJQpsMC: 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmM0Y2ZhYTg3MWYzNzZiOTJjZWE3MTBlZDVlOThlZWJQpsMC: 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: ]] 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.799 14:30:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.365 nvme0n1 00:32:28.365 14:30:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.365 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.365 14:30:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.365 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:28.365 14:30:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.365 14:30:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.365 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.365 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:28.365 14:30:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.365 14:30:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.624 14:30:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.624 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:28.624 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:32:28.624 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:28.624 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:28.624 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:28.624 14:30:55 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=3 00:32:28.624 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmMyOWVhMGIxMmRlYTIxNGVjZmJjNWViYjE1NzRjMDA3MTdhM2E4YzJmYTVhMTUywArm9A==: 00:32:28.624 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: 00:32:28.624 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:28.624 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:28.624 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmMyOWVhMGIxMmRlYTIxNGVjZmJjNWViYjE1NzRjMDA3MTdhM2E4YzJmYTVhMTUywArm9A==: 00:32:28.624 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: ]] 00:32:28.624 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: 00:32:28.624 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:32:28.624 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:28.624 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:28.624 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:28.624 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:28.624 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:28.624 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:28.624 14:30:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.624 14:30:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.624 14:30:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.624 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:28.624 14:30:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:28.624 14:30:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:28.624 14:30:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:28.624 14:30:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:28.624 14:30:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.624 14:30:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:28.624 14:30:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:28.624 14:30:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:28.624 14:30:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:28.624 14:30:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:28.624 14:30:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:28.624 14:30:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.624 14:30:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.190 nvme0n1 00:32:29.190 14:30:56 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGRjMTMwY2M4OGE0OWNjY2IwYmIxMTljMzFkZWJmNzlmMmNlYjI4NGE0ZWNlOWEzOTc2MGM5OTFlNThmMzliOJnPsSM=: 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGRjMTMwY2M4OGE0OWNjY2IwYmIxMTljMzFkZWJmNzlmMmNlYjI4NGE0ZWNlOWEzOTc2MGM5OTFlNThmMzliOJnPsSM=: 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:29.190 14:30:56 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:29.190 14:30:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:29.191 14:30:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:29.191 14:30:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.191 14:30:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.753 nvme0n1 00:32:29.753 14:30:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.753 14:30:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.753 14:30:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.753 14:30:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.753 14:30:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.753 14:30:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.010 14:30:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.010 14:30:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.010 14:30:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.010 14:30:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.010 14:30:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.010 14:30:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:30.010 14:30:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.010 14:30:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:32:30.010 14:30:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.010 14:30:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:30.010 14:30:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:30.010 14:30:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:30.010 14:30:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE3NTE0ZDBmZjc3Yjk2MzRiOTZjOGRkZmU1ZGNiMGP2Poh7: 00:32:30.010 14:30:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: 00:32:30.010 14:30:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:30.010 14:30:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # 
echo ffdhe8192 00:32:30.010 14:30:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE3NTE0ZDBmZjc3Yjk2MzRiOTZjOGRkZmU1ZGNiMGP2Poh7: 00:32:30.010 14:30:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: ]] 00:32:30.010 14:30:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2YxZjA4YzNkYjM3YmM5NDBmMWE3NjE1YmVjYWI1NTE0YzYxODY5ZTg5MTdjMDkyZWQwZGVkZTZhMjZmZjU5YhQXhHI=: 00:32:30.010 14:30:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:32:30.010 14:30:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.010 14:30:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:30.010 14:30:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:30.010 14:30:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:30.010 14:30:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.010 14:30:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:30.010 14:30:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.010 14:30:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.010 14:30:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.010 14:30:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.011 14:30:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:30.011 14:30:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:30.011 14:30:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:30.011 14:30:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.011 14:30:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.011 14:30:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:30.011 14:30:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:30.011 14:30:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:30.011 14:30:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:30.011 14:30:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:30.011 14:30:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:30.011 14:30:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.011 14:30:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.941 nvme0n1 00:32:30.941 14:30:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.941 14:30:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.941 14:30:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.941 14:30:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.941 14:30:58 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAwNTJlMjY2YWJiZmJlMzdjOGZmNTY2MzllYzA0NTE1ZDY5YWFjNTA0ZWRjNWFmucbEqw==: 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAwNTJlMjY2YWJiZmJlMzdjOGZmNTY2MzllYzA0NTE1ZDY5YWFjNTA0ZWRjNWFmucbEqw==: 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: ]] 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.198 14:30:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.130 nvme0n1 00:32:32.130 14:30:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.130 14:30:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.130 14:30:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.130 14:30:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.130 14:30:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.130 14:30:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.388 14:30:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.388 14:30:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.388 14:30:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.388 14:30:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.388 14:30:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.388 14:30:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.388 14:30:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:32:32.388 14:30:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.388 14:30:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:32.388 14:30:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:32.388 14:30:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:32.388 14:30:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmM0Y2ZhYTg3MWYzNzZiOTJjZWE3MTBlZDVlOThlZWJQpsMC: 00:32:32.388 14:30:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: 00:32:32.388 14:30:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:32.388 14:30:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:32.388 14:30:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmM0Y2ZhYTg3MWYzNzZiOTJjZWE3MTBlZDVlOThlZWJQpsMC: 00:32:32.388 14:30:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: ]] 00:32:32.388 14:30:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTIzNWFjZGY4NDk4Yjc1ZDQyZWVhNDkwNGEzN2IxYmTBfGLg: 00:32:32.388 14:30:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:32:32.388 14:30:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.388 14:30:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:32.388 14:30:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:32.388 14:30:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:32.388 14:30:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.388 14:30:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:32.388 14:30:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.388 14:30:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.388 14:30:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.389 14:30:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.389 14:30:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.389 14:30:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.389 14:30:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.389 14:30:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.389 14:30:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.389 14:30:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:32.389 14:30:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:32.389 14:30:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:32.389 14:30:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:32.389 14:30:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:32.389 14:30:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:32.389 14:30:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.389 14:30:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.321 nvme0n1 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZmMyOWVhMGIxMmRlYTIxNGVjZmJjNWViYjE1NzRjMDA3MTdhM2E4YzJmYTVhMTUywArm9A==: 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZmMyOWVhMGIxMmRlYTIxNGVjZmJjNWViYjE1NzRjMDA3MTdhM2E4YzJmYTVhMTUywArm9A==: 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: ]] 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRlMjgwYzQwY2I3Y2NlM2UyMTA5M2FhMjMwNWI4NTe/31X+: 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:33.321 14:31:00 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.321 14:31:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.691 nvme0n1 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGRjMTMwY2M4OGE0OWNjY2IwYmIxMTljMzFkZWJmNzlmMmNlYjI4NGE0ZWNlOWEzOTc2MGM5OTFlNThmMzliOJnPsSM=: 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGRjMTMwY2M4OGE0OWNjY2IwYmIxMTljMzFkZWJmNzlmMmNlYjI4NGE0ZWNlOWEzOTc2MGM5OTFlNThmMzliOJnPsSM=: 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe8192 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.691 14:31:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.623 nvme0n1 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAwNTJlMjY2YWJiZmJlMzdjOGZmNTY2MzllYzA0NTE1ZDY5YWFjNTA0ZWRjNWFmucbEqw==: 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAwNTJlMjY2YWJiZmJlMzdjOGZmNTY2MzllYzA0NTE1ZDY5YWFjNTA0ZWRjNWFmucbEqw==: 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: ]] 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDFhYjcxYzIyOGM1NzE4MjEyNzkwZmFiY2FjOTgwMjE4Mjg4YmI2YmMyMmE3N2I1HjBE4A==: 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 
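[editor's note] The `NOT rpc_cmd bdev_nvme_attach_controller ...` invocations traced above are the suite's negative-test pattern: the attach is expected to be rejected (no DH-CHAP key is supplied), so a wrapper inverts the exit status. A simplified sketch of such a wrapper in bash, assuming the same semantics as the helper being traced from common/autotest_common.sh (the real helper, as the `local es=0` / `(( es > 128 ))` lines in this trace show, also captures the exit code and treats signal deaths separately):

    # Succeed only when the wrapped command fails; used for expected-failure cases.
    NOT() {
        local es=0
        "$@" || es=$?      # run the command, capture its exit status
        (( es > 128 )) && return "$es"   # killed by a signal: propagate, don't invert
        (( es != 0 ))      # non-zero exit becomes success, zero becomes failure
    }
    # e.g. NOT rpc_cmd bdev_nvme_attach_controller ...  # passes when the attach is rejected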
00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.623 14:31:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.881 request: 00:32:35.881 { 00:32:35.881 "name": "nvme0", 00:32:35.881 "trtype": "rdma", 00:32:35.881 "traddr": "192.168.100.8", 00:32:35.881 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:35.881 "adrfam": "ipv4", 00:32:35.881 "trsvcid": "4420", 00:32:35.881 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:35.881 "method": "bdev_nvme_attach_controller", 00:32:35.881 "req_id": 1 00:32:35.881 } 00:32:35.881 Got JSON-RPC error response 00:32:35.881 response: 00:32:35.881 { 00:32:35.881 "code": -5, 00:32:35.881 "message": "Input/output error" 00:32:35.881 } 00:32:35.881 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:35.881 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:32:35.881 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:35.881 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:35.881 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:35.881 14:31:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.881 14:31:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:32:35.881 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.881 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.881 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.881 14:31:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:32:35.881 14:31:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:32:35.881 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.881 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.881 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.881 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.881 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.881 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:35.881 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:32:35.881 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:35.881 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:35.881 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:35.881 14:31:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:35.881 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 
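[editor's note] The "request:" / "response:" block above is the raw JSON-RPC exchange behind rpc_cmd. An equivalent direct call through SPDK's RPC client would look roughly like this (a sketch; /var/tmp/spdk.sock as the default socket path is an assumption, the flags are the ones visible in the trace):

    scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_attach_controller \
        -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
    # No --dhchap-key is passed while the target enforces DH-HMAC-CHAP, so the
    # connect is refused and the RPC returns code -5 ("Input/output error"),
    # exactly as logged above.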
00:32:35.881 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:35.881 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:35.881 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:35.881 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:35.881 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:35.881 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:35.881 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.881 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.139 request: 00:32:36.139 { 00:32:36.139 "name": "nvme0", 00:32:36.139 "trtype": "rdma", 00:32:36.139 "traddr": "192.168.100.8", 00:32:36.139 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:36.139 "adrfam": "ipv4", 00:32:36.139 "trsvcid": "4420", 00:32:36.139 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:36.139 "dhchap_key": "key2", 00:32:36.139 "method": "bdev_nvme_attach_controller", 00:32:36.139 "req_id": 1 00:32:36.139 } 00:32:36.139 Got JSON-RPC error response 00:32:36.139 response: 00:32:36.139 { 00:32:36.139 "code": -5, 00:32:36.139 "message": "Input/output error" 00:32:36.139 } 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.139 request: 00:32:36.139 { 00:32:36.139 "name": "nvme0", 00:32:36.139 "trtype": "rdma", 00:32:36.139 "traddr": "192.168.100.8", 00:32:36.139 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:36.139 "adrfam": "ipv4", 00:32:36.139 "trsvcid": "4420", 00:32:36.139 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:36.139 "dhchap_key": "key1", 00:32:36.139 "dhchap_ctrlr_key": "ckey2", 00:32:36.139 "method": "bdev_nvme_attach_controller", 00:32:36.139 "req_id": 1 00:32:36.139 } 00:32:36.139 Got JSON-RPC error response 00:32:36.139 response: 00:32:36.139 { 00:32:36.139 "code": -5, 00:32:36.139 "message": "Input/output error" 00:32:36.139 } 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:32:36.139 rmmod nvme_rdma 00:32:36.139 rmmod nvme_fabrics 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 239920 ']' 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 239920 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 239920 ']' 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 239920 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:36.139 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 239920 00:32:36.396 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:36.396 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:36.396 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 239920' 00:32:36.396 killing process with pid 239920 00:32:36.396 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 239920 00:32:36.396 14:31:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 239920 00:32:36.396 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:36.396 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:32:36.396 14:31:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:36.396 14:31:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:36.396 14:31:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:32:36.396 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:32:36.396 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:32:36.653 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:36.653 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:36.653 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:36.653 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:36.653 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:36.653 14:31:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:32:36.653 14:31:03 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:32:38.025 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:38.025 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:38.025 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:38.025 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:38.025 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:38.025 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:38.025 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:38.025 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:38.025 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:38.025 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:38.283 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:38.283 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:38.283 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:38.283 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:38.283 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:38.283 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:39.217 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:32:39.217 14:31:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.M6Q /tmp/spdk.key-null.NqQ /tmp/spdk.key-sha256.O9T /tmp/spdk.key-sha384.a94 /tmp/spdk.key-sha512.Blq /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:32:39.217 14:31:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:32:40.589 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:40.589 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:32:40.589 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:40.589 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:40.589 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:40.589 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:40.589 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:40.589 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:40.589 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:40.589 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:40.589 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:40.589 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:40.589 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:40.589 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:40.589 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:40.589 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:40.589 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:40.589 00:32:40.589 real 0m57.213s 00:32:40.589 user 0m55.863s 00:32:40.589 sys 0m6.634s 00:32:40.589 14:31:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:40.589 14:31:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.589 ************************************ 00:32:40.589 END TEST nvmf_auth_host 00:32:40.589 ************************************ 00:32:40.589 14:31:07 nvmf_rdma -- nvmf/nvmf.sh@107 -- # [[ rdma == \t\c\p ]] 00:32:40.589 14:31:07 nvmf_rdma -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:32:40.589 14:31:07 nvmf_rdma -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:32:40.589 14:31:07 nvmf_rdma -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:32:40.589 14:31:07 nvmf_rdma -- nvmf/nvmf.sh@122 -- # run_test 
nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:32:40.589 14:31:07 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:40.589 14:31:07 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:40.589 14:31:07 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:32:40.589 ************************************ 00:32:40.589 START TEST nvmf_bdevperf 00:32:40.589 ************************************ 00:32:40.589 14:31:07 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:32:40.847 * Looking for test storage... 00:32:40.847 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:32:40.847 14:31:07 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:32:40.847 14:31:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:32:40.847 14:31:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:40.847 14:31:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:40.847 14:31:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:40.847 14:31:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:40.847 14:31:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:40.847 14:31:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:40.847 14:31:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:40.847 14:31:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:40.847 14:31:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:40.847 14:31:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:40.847 14:31:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:32:40.847 14:31:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:32:40.847 14:31:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:40.847 14:31:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:40.847 14:31:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:40.847 14:31:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:40.847 14:31:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:32:40.847 14:31:07 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:40.847 14:31:07 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:40.847 14:31:07 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:40.847 14:31:07 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.847 14:31:07 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.847 14:31:07 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.847 14:31:07 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:32:40.847 14:31:07 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.847 14:31:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:32:40.847 14:31:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:40.847 14:31:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:40.847 14:31:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:40.847 14:31:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:40.847 14:31:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:40.847 14:31:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:40.847 14:31:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:40.848 14:31:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:40.848 14:31:07 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:40.848 14:31:07 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:40.848 14:31:07 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:32:40.848 14:31:07 
nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:32:40.848 14:31:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:40.848 14:31:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:40.848 14:31:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:40.848 14:31:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:40.848 14:31:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:40.848 14:31:07 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:40.848 14:31:07 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:40.848 14:31:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:40.848 14:31:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:40.848 14:31:07 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:32:40.848 14:31:07 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:43.405 
14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:32:43.405 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:32:43.405 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:32:43.405 Found net devices under 0000:81:00.0: mlx_0_0 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:32:43.405 Found net devices under 0000:81:00.1: mlx_0_1 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@420 -- # rdma_device_init 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@58 -- # uname 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@63 -- # modprobe ib_core 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:32:43.405 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf 
-- nvmf/common.sh@105 -- # continue 2 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:32:43.406 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:43.406 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:32:43.406 altname enp129s0f0np0 00:32:43.406 inet 192.168.100.8/24 scope global mlx_0_0 00:32:43.406 valid_lft forever preferred_lft forever 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:32:43.406 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:43.406 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:32:43.406 altname enp129s0f1np1 00:32:43.406 inet 192.168.100.9/24 scope global mlx_0_1 00:32:43.406 valid_lft forever preferred_lft forever 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # 
get_rdma_if_list 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:32:43.406 192.168.100.9' 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:32:43.406 192.168.100.9' 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # head -n 1 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:32:43.406 192.168.100.9' 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # tail -n +2 00:32:43.406 
14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # head -n 1 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=250703 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 250703 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 250703 ']' 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:43.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:43.406 [2024-07-24 14:31:10.488454] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:32:43.406 [2024-07-24 14:31:10.488541] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:43.406 EAL: No free 2048 kB hugepages reported on node 1 00:32:43.406 [2024-07-24 14:31:10.557301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:43.406 [2024-07-24 14:31:10.640536] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:43.406 [2024-07-24 14:31:10.640591] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:43.406 [2024-07-24 14:31:10.640624] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:43.406 [2024-07-24 14:31:10.640636] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:43.406 [2024-07-24 14:31:10.640645] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
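Reference sketch: the nvmfappstart/waitforlisten trace above reduces to a launch-and-poll pattern. A minimal standalone equivalent, assuming the same tree layout and that waitforlisten's liveness probe is rpc.py rpc_get_methods against the default RPC socket (the retry budget mirrors max_retries=100 traced above):

  spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  # Start the NVMe-oF target: shm id 0, all tracepoint groups, cores 1-3 (-m 0xE)
  $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # Poll the UNIX-domain RPC socket until the app answers, as waitforlisten does
  for ((i = 0; i < 100; i++)); do
      $spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.5
  done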
00:32:43.406 [2024-07-24 14:31:10.640729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:32:43.406 [2024-07-24 14:31:10.640808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:32:43.406 [2024-07-24 14:31:10.640826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0
00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:32:43.406 14:31:10 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:43.407 14:31:10 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:32:43.673 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:43.673 14:31:10 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:32:43.673 14:31:10 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:43.673 14:31:10 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:32:43.673 [2024-07-24 14:31:10.805973] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x175d200/0x17616b0) succeed.
00:32:43.673 [2024-07-24 14:31:10.817049] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x175e750/0x17a2d40) succeed.
00:32:43.673 14:31:10 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:43.673 14:31:10 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:32:43.673 14:31:10 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:43.673 14:31:10 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:32:43.673 Malloc0
00:32:43.673 14:31:10 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:43.673 14:31:10 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:32:43.673 14:31:10 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:43.673 14:31:10 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:32:43.673 14:31:10 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:43.673 14:31:10 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:32:43.673 14:31:10 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:43.673 14:31:10 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:32:43.673 14:31:10 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:43.673 14:31:10 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:32:43.673 14:31:10 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:43.673 14:31:10 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:32:43.673 [2024-07-24 14:31:10.985569] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:32:43.673 14:31:10 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
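The five rpc_cmd calls traced above are the complete target bring-up for this test. Outside the harness the same provisioning can be done directly with rpc.py; a sketch using only the RPCs and flags named in this trace (default socket /var/tmp/spdk.sock):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192              # RDMA transport, 8 KiB in-capsule data
  $rpc bdev_malloc_create 64 512 -b Malloc0                                         # 64 MiB ramdisk bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001    # -a: allow any host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                     # expose Malloc0 as a namespace
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The listener notice in the trace ("NVMe/RDMA Target Listening on 192.168.100.8 port 4420") confirms the final step took effect.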
00:32:43.673 14:31:10 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:32:43.673 14:31:10 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:32:43.673 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:32:43.673 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:32:43.673 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:32:43.673 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:32:43.673 {
00:32:43.673 "params": {
00:32:43.673 "name": "Nvme$subsystem",
00:32:43.673 "trtype": "$TEST_TRANSPORT",
00:32:43.673 "traddr": "$NVMF_FIRST_TARGET_IP",
00:32:43.673 "adrfam": "ipv4",
00:32:43.673 "trsvcid": "$NVMF_PORT",
00:32:43.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:32:43.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:32:43.673 "hdgst": ${hdgst:-false},
00:32:43.673 "ddgst": ${ddgst:-false}
00:32:43.673 },
00:32:43.673 "method": "bdev_nvme_attach_controller"
00:32:43.673 }
00:32:43.673 EOF
00:32:43.673 )")
00:32:43.673 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:32:43.673 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:32:43.673 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:32:43.673 14:31:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:32:43.673 "params": {
00:32:43.673 "name": "Nvme1",
00:32:43.673 "trtype": "rdma",
00:32:43.673 "traddr": "192.168.100.8",
00:32:43.673 "adrfam": "ipv4",
00:32:43.673 "trsvcid": "4420",
00:32:43.673 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:32:43.673 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:32:43.673 "hdgst": false,
00:32:43.673 "ddgst": false
00:32:43.673 },
00:32:43.673 "method": "bdev_nvme_attach_controller"
00:32:43.673 }'
00:32:43.673 [2024-07-24 14:31:11.031266] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:32:43.673 [2024-07-24 14:31:11.031338] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid250848 ]
00:32:43.930 EAL: No free 2048 kB hugepages reported on node 1
00:32:43.930 [2024-07-24 14:31:11.100458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:43.930 [2024-07-24 14:31:11.191382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:32:44.189 Running I/O for 1 seconds...
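gen_nvmf_target_json hands that expanded JSON to bdevperf over an anonymous pipe (--json /dev/fd/62). Replayed by hand it would sit inside SPDK's usual subsystems wrapper; a sketch (the bdev-subsystem wrapper and the /tmp path are assumptions from SPDK's JSON config schema, not shown in this trace):

  cat > /tmp/nvme1.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "rdma",
              "traddr": "192.168.100.8",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  # Same flags as the run above: queue depth 128, 4 KiB I/O, verify workload, 1 second
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /tmp/nvme1.json -q 128 -o 4096 -w verify -t 1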
00:32:45.122 
00:32:45.122                                                                          Latency(us)
00:32:45.122 Device Information            : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:32:45.122 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:32:45.122 	 Verification LBA range: start 0x0 length 0x4000
00:32:45.122 	 Nvme1n1                :       1.01   14089.94      55.04       0.00       0.00    9030.70    3640.89   11699.39
00:32:45.122 ===================================================================================================================
00:32:45.122 Total                         :            14089.94      55.04       0.00       0.00    9030.70    3640.89   11699.39
00:32:45.380 14:31:12 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=250995
00:32:45.380 14:31:12 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:32:45.380 14:31:12 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:32:45.380 14:31:12 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:32:45.380 14:31:12 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:32:45.380 14:31:12 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:32:45.380 14:31:12 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:32:45.380 14:31:12 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:32:45.380 {
00:32:45.380 "params": {
00:32:45.380 "name": "Nvme$subsystem",
00:32:45.380 "trtype": "$TEST_TRANSPORT",
00:32:45.380 "traddr": "$NVMF_FIRST_TARGET_IP",
00:32:45.380 "adrfam": "ipv4",
00:32:45.380 "trsvcid": "$NVMF_PORT",
00:32:45.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:32:45.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:32:45.380 "hdgst": ${hdgst:-false},
00:32:45.380 "ddgst": ${ddgst:-false}
00:32:45.380 },
00:32:45.380 "method": "bdev_nvme_attach_controller"
00:32:45.380 }
00:32:45.380 EOF
00:32:45.380 )")
00:32:45.380 14:31:12 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:32:45.380 14:31:12 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:32:45.380 14:31:12 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:32:45.380 14:31:12 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:32:45.380 "params": {
00:32:45.380 "name": "Nvme1",
00:32:45.380 "trtype": "rdma",
00:32:45.380 "traddr": "192.168.100.8",
00:32:45.380 "adrfam": "ipv4",
00:32:45.380 "trsvcid": "4420",
00:32:45.380 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:32:45.380 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:32:45.380 "hdgst": false,
00:32:45.380 "ddgst": false
00:32:45.380 },
00:32:45.380 "method": "bdev_nvme_attach_controller"
00:32:45.380 }'
00:32:45.380 [2024-07-24 14:31:12.686665] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:32:45.380 [2024-07-24 14:31:12.686745] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid250995 ]
00:32:45.639 EAL: No free 2048 kB hugepages reported on node 1
00:32:45.639 [2024-07-24 14:31:12.755911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:45.639 [2024-07-24 14:31:12.838316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:32:45.897 Running I/O for 15 seconds...
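This second run is the failover case: the same generated JSON arrives on /dev/fd/63, but the run lasts 15 seconds and adds -f so bdevperf keeps going through I/O failures while the harness hard-kills the target mid-run. A sketch of the exercise's shape, with the pids from this log (/tmp/nvme1.json stands in for the anonymous fd, as in the earlier sketch):

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /tmp/nvme1.json -q 128 -o 4096 -w verify -t 15 -f &
  bdevperfpid=$!    # 250995 in this run
  sleep 3
  kill -9 250703    # the nvmf_tgt started earlier; outstanding READs then complete
                    # as "ABORTED - SQ DELETION", which is exactly the qpair output below
  sleep 3           # the harness later restarts the target so the verify run can resume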
00:32:48.427 14:31:15 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 250703 00:32:48.427 14:31:15 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:32:49.363 [2024-07-24 14:31:16.683977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:35856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x180900 00:32:49.363 [2024-07-24 14:31:16.684053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.363 [2024-07-24 14:31:16.684101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:35864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x180900 00:32:49.363 [2024-07-24 14:31:16.684122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.363 [2024-07-24 14:31:16.684141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:35872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x180900 00:32:49.363 [2024-07-24 14:31:16.684157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.363 [2024-07-24 14:31:16.684175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:35880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x180900 00:32:49.363 [2024-07-24 14:31:16.684191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.363 [2024-07-24 14:31:16.684209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:35888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x180900 00:32:49.363 [2024-07-24 14:31:16.684225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.363 [2024-07-24 14:31:16.684242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x180900 00:32:49.363 [2024-07-24 14:31:16.684258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.363 [2024-07-24 14:31:16.684276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:35904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x180900 00:32:49.363 [2024-07-24 14:31:16.684292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.363 [2024-07-24 14:31:16.684309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:35912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x180900 00:32:49.363 [2024-07-24 14:31:16.684325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.363 [2024-07-24 14:31:16.684343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:35920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x180900 00:32:49.363 [2024-07-24 14:31:16.684359] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.363 [2024-07-24 14:31:16.684376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:35928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x180900 00:32:49.363 [2024-07-24 14:31:16.684406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.363 [2024-07-24 14:31:16.684425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:35936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x180900 00:32:49.364 [2024-07-24 14:31:16.684441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.364 [2024-07-24 14:31:16.684458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:35944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x180900 00:32:49.364 [2024-07-24 14:31:16.684473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.364 [2024-07-24 14:31:16.684490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:35952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x180900 00:32:49.364 [2024-07-24 14:31:16.684505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.364 [2024-07-24 14:31:16.684523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:35960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x180900 00:32:49.364 [2024-07-24 14:31:16.684539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.364 [2024-07-24 14:31:16.684557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:35968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x180900 00:32:49.364 [2024-07-24 14:31:16.684572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.364 [2024-07-24 14:31:16.684590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:35976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x180900 00:32:49.364 [2024-07-24 14:31:16.684605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.364 [2024-07-24 14:31:16.684622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:35984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x180900 00:32:49.364 [2024-07-24 14:31:16.684637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.364 [2024-07-24 14:31:16.684653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:35992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x180900 00:32:49.364 [2024-07-24 14:31:16.684669] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.364 [2024-07-24 14:31:16.684686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:36000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x180900 00:32:49.364 [2024-07-24 14:31:16.684701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.364 [2024-07-24 14:31:16.684718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:36008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x180900 00:32:49.364 [2024-07-24 14:31:16.684733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.364 [2024-07-24 14:31:16.684750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:36016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x180900 00:32:49.364 [2024-07-24 14:31:16.684770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.364 [2024-07-24 14:31:16.684788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x180900 00:32:49.364 [2024-07-24 14:31:16.684814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.364 [2024-07-24 14:31:16.684854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:36032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x180900 00:32:49.364 [2024-07-24 14:31:16.684868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.364 [2024-07-24 14:31:16.684884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:36040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x180900 00:32:49.364 [2024-07-24 14:31:16.684905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.364 [2024-07-24 14:31:16.684920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:36048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x180900 00:32:49.364 [2024-07-24 14:31:16.684933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.364 [2024-07-24 14:31:16.684949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:36056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x180900 00:32:49.364 [2024-07-24 14:31:16.684962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.364 [2024-07-24 14:31:16.684977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:36064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x180900 00:32:49.364 [2024-07-24 14:31:16.684990] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.364 [2024-07-24 14:31:16.685005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:36072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x180900 00:32:49.364 [2024-07-24 14:31:16.685019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.364 [2024-07-24 14:31:16.685034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:36080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x180900 00:32:49.364 [2024-07-24 14:31:16.685047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.364 [2024-07-24 14:31:16.685062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:36088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x180900 00:32:49.364 [2024-07-24 14:31:16.685093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.364 [2024-07-24 14:31:16.685112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:36096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x180900 00:32:49.364 [2024-07-24 14:31:16.685127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.364 [2024-07-24 14:31:16.685144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:36104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x180900 00:32:49.364 [2024-07-24 14:31:16.685159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.364 [2024-07-24 14:31:16.685180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:36112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x180900 00:32:49.364 [2024-07-24 14:31:16.685198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.364 [2024-07-24 14:31:16.685215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:36120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x180900 00:32:49.364 [2024-07-24 14:31:16.685231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.364 [2024-07-24 14:31:16.685248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:36128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x180900 00:32:49.364 [2024-07-24 14:31:16.685263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.364 [2024-07-24 14:31:16.685281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:36136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x180900 00:32:49.364 [2024-07-24 14:31:16.685296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.364 [2024-07-24 14:31:16.685313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:36144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x180900 00:32:49.364 [2024-07-24 14:31:16.685328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.364 [2024-07-24 14:31:16.685345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:36152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x180900 00:32:49.364 [2024-07-24 14:31:16.685359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.364 [2024-07-24 14:31:16.685376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:36160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x180900 00:32:49.364 [2024-07-24 14:31:16.685391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.364 [2024-07-24 14:31:16.685408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:36168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x180900 00:32:49.364 [2024-07-24 14:31:16.685423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.364 [2024-07-24 14:31:16.685440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:36176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x180900 00:32:49.364 [2024-07-24 14:31:16.685455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.364 [2024-07-24 14:31:16.685472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:36184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x180900 00:32:49.364 [2024-07-24 14:31:16.685487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.364 [2024-07-24 14:31:16.685504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x180900 00:32:49.364 [2024-07-24 14:31:16.685519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.364 [2024-07-24 14:31:16.685540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x180900 00:32:49.364 [2024-07-24 14:31:16.685556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.364 [2024-07-24 14:31:16.685573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:36208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x180900 00:32:49.364 [2024-07-24 14:31:16.685589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 
cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.364 [2024-07-24 14:31:16.685606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x180900 00:32:49.364 [2024-07-24 14:31:16.685622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 
[... the same READ command / ABORTED - SQ DELETION (00/08) completion pair repeats for every 8-block read still queued on sqid:1, lba 36216 through lba 36856, all with key:0x180900 ...]
00:32:49.367 [2024-07-24 14:31:16.688400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:36856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x180900 00:32:49.367 [2024-07-24 14:31:16.688415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.367 [2024-07-24 14:31:16.688432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:36864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.367 [2024-07-24 
14:31:16.688447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53052 cdw0:ceebd000 sqhd:0907 p:1 m:0 dnr:0 00:32:49.367 [2024-07-24 14:31:16.690621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:49.367 [2024-07-24 14:31:16.690661] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:49.367 [2024-07-24 14:31:16.690677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36872 len:8 PRP1 0x0 PRP2 0x0 00:32:49.367 [2024-07-24 14:31:16.690693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.367 [2024-07-24 14:31:16.690754] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 00:32:49.367 [2024-07-24 14:31:16.694529] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.367 [2024-07-24 14:31:16.714283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:49.367 [2024-07-24 14:31:16.717431] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:49.367 [2024-07-24 14:31:16.717460] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:49.367 [2024-07-24 14:31:16.717479] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:32:50.740 [2024-07-24 14:31:17.721692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:50.740 [2024-07-24 14:31:17.721745] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:50.740 [2024-07-24 14:31:17.721982] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:50.740 [2024-07-24 14:31:17.722004] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:50.740 [2024-07-24 14:31:17.722018] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:32:50.740 [2024-07-24 14:31:17.723693] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:50.740 [2024-07-24 14:31:17.725179] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
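Every aborted read in the run condensed above carries the same completion status: (00/08) is status code type 0x0 (generic command status) with status code 0x08 (command aborted due to SQ deletion), i.e. everything still queued on qpair 1 was failed back when its submission queue was torn down for the reset. A minimal sketch for summarizing such a flood from a saved console capture (build.log is a hypothetical file name, not something this job writes):

    # Count the SQ-deletion aborts, then print the lowest and highest LBA they hit.
    grep -c 'ABORTED - SQ DELETION' build.log
    grep -o 'lba:[0-9]*' build.log | sort -t: -k2,2n | sed -n '1p;$p'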
00:32:50.740 [2024-07-24 14:31:17.736901] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:50.740 [2024-07-24 14:31:17.739519] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:50.740 [2024-07-24 14:31:17.739544] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:50.740 [2024-07-24 14:31:17.739570] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:32:51.306 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 250703 Killed "${NVMF_APP[@]}" "$@" 00:32:51.306 14:31:18 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:32:51.306 14:31:18 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:51.306 14:31:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:51.306 14:31:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:51.306 14:31:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:51.306 14:31:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=251658 00:32:51.306 14:31:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:51.306 14:31:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 251658 00:32:51.306 14:31:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 251658 ']' 00:32:51.306 14:31:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:51.306 14:31:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:51.306 14:31:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:51.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:51.306 14:31:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:51.306 14:31:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:51.564 [2024-07-24 14:31:18.704761] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:32:51.564 [2024-07-24 14:31:18.704857] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:51.564 [2024-07-24 14:31:18.743651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:51.564 [2024-07-24 14:31:18.743691] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:32:51.564 [2024-07-24 14:31:18.743937] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:51.564 [2024-07-24 14:31:18.743962] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:51.564 [2024-07-24 14:31:18.743978] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:32:51.564 EAL: No free 2048 kB hugepages reported on node 1 00:32:51.564 [2024-07-24 14:31:18.745914] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:51.564 [2024-07-24 14:31:18.747614] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:51.564 [2024-07-24 14:31:18.759920] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:51.564 [2024-07-24 14:31:18.762916] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:32:51.564 [2024-07-24 14:31:18.762955] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:32:51.564 [2024-07-24 14:31:18.762970] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:32:51.564 [2024-07-24 14:31:18.784180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:51.564 [2024-07-24 14:31:18.877895] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:51.564 [2024-07-24 14:31:18.877956] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:51.565 [2024-07-24 14:31:18.877974] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:51.565 [2024-07-24 14:31:18.877987] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:51.565 [2024-07-24 14:31:18.877998] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:51.565 [2024-07-24 14:31:18.878082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:51.565 [2024-07-24 14:31:18.878149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:51.565 [2024-07-24 14:31:18.878152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:51.823 14:31:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:51.823 14:31:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:32:51.823 14:31:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:51.823 14:31:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:51.823 14:31:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:51.823 14:31:19 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:51.823 14:31:19 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:32:51.823 14:31:19 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.823 14:31:19 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:51.823 [2024-07-24 14:31:19.033999] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x22c2200/0x22c66b0) succeed. 00:32:51.823 [2024-07-24 14:31:19.044896] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x22c3750/0x2307d40) succeed. 00:32:51.823 14:31:19 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.823 14:31:19 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:51.823 14:31:19 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.823 14:31:19 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:51.823 Malloc0 00:32:51.823 14:31:19 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.823 14:31:19 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:51.823 14:31:19 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.823 14:31:19 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:52.080 14:31:19 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.080 14:31:19 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:52.080 14:31:19 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.080 14:31:19 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:52.080 14:31:19 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.080 14:31:19 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:32:52.080 14:31:19 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.080 14:31:19 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:52.080 [2024-07-24 14:31:19.206548] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:32:52.080 14:31:19 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:32:52.080 14:31:19 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 250995 00:32:52.645 [2024-07-24 14:31:19.767119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:32:52.645 [2024-07-24 14:31:19.767155] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:52.646 [2024-07-24 14:31:19.767395] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:52.646 [2024-07-24 14:31:19.767416] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:52.646 [2024-07-24 14:31:19.767431] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:32:52.646 [2024-07-24 14:31:19.770678] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:52.646 [2024-07-24 14:31:19.775830] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:52.646 [2024-07-24 14:31:19.826715] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:33:00.751 
00:33:00.751 Latency(us) 
00:33:00.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:33:00.751 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:33:00.751 Verification LBA range: start 0x0 length 0x4000 
00:33:00.751 Nvme1n1 : 15.01 10322.70 40.32 8363.46 0.00 6826.14 558.27 1043915.66 
00:33:00.751 =================================================================================================================== 
00:33:00.751 Total : 10322.70 40.32 8363.46 0.00 6826.14 558.27 1043915.66 
00:33:01.009 14:31:28 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:33:01.009 14:31:28 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:01.009 14:31:28 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.009 14:31:28 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:01.009 14:31:28 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.009 14:31:28 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:33:01.009 14:31:28 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:33:01.009 14:31:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:01.009 14:31:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:33:01.009 14:31:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:33:01.009 14:31:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:33:01.009 14:31:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:33:01.009 14:31:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:01.009 14:31:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:33:01.009 rmmod nvme_rdma 00:33:01.009 rmmod nvme_fabrics 00:33:01.009 14:31:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:01.009 14:31:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:33:01.009 14:31:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:33:01.009 14:31:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 251658 ']' 00:33:01.009 14:31:28 
nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 251658 00:33:01.009 14:31:28 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 251658 ']' 00:33:01.009 14:31:28 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 251658 00:33:01.009 14:31:28 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname 00:33:01.009 14:31:28 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:01.009 14:31:28 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 251658 00:33:01.009 14:31:28 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:01.009 14:31:28 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:01.009 14:31:28 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 251658' 00:33:01.009 killing process with pid 251658 00:33:01.009 14:31:28 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 251658 00:33:01.009 14:31:28 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@970 -- # wait 251658 00:33:01.575 14:31:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:01.575 14:31:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:33:01.575 00:33:01.575 real 0m20.776s 00:33:01.575 user 1m1.966s 00:33:01.575 sys 0m2.889s 00:33:01.575 14:31:28 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:01.575 14:31:28 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:01.575 ************************************ 00:33:01.575 END TEST nvmf_bdevperf 00:33:01.575 ************************************ 00:33:01.575 14:31:28 nvmf_rdma -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:33:01.575 14:31:28 nvmf_rdma -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:01.575 14:31:28 nvmf_rdma -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:01.575 14:31:28 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:01.575 ************************************ 00:33:01.575 START TEST nvmf_target_disconnect 00:33:01.575 ************************************ 00:33:01.575 14:31:28 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:33:01.575 * Looking for test storage... 
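Before the target_disconnect run takes over, two notes on the bdevperf test that just finished. The summary table above is internally consistent: 10322.70 IOPS of 4096-byte verify I/O works out to 10322.70 x 4096 bytes/s, roughly 40.32 MiB/s, and the large Fail/s figure reflects reads failed during the deliberate controller resets rather than data errors. The nvmftestfini teardown traced above likewise reduces to a few commands (the pid and NQN come from this log; wait only succeeds because nvmf_tgt was launched from the same shell):

    # Tear down what the bdevperf test set up.
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$SPDK/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-rdma   # also drags out nvme_fabrics, per the rmmod lines above
    kill 251658 && wait 251658 2>/dev/null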
00:33:01.575 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:33:01.575 14:31:28 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:33:01.575 14:31:28 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:33:01.575 14:31:28 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:01.575 14:31:28 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:01.575 14:31:28 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:01.575 14:31:28 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:01.575 14:31:28 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:01.575 14:31:28 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:01.575 14:31:28 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:01.575 14:31:28 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:01.575 14:31:28 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:01.575 14:31:28 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:01.575 14:31:28 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:33:01.575 14:31:28 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:33:01.575 14:31:28 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:01.575 14:31:28 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:01.575 14:31:28 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:01.575 14:31:28 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:01.575 14:31:28 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:33:01.575 14:31:28 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:01.575 14:31:28 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:01.575 14:31:28 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:01.576 14:31:28 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.576 14:31:28 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.576 14:31:28 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.576 14:31:28 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:33:01.576 14:31:28 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.576 14:31:28 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:33:01.576 14:31:28 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:01.576 14:31:28 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:01.576 14:31:28 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:01.576 14:31:28 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:01.576 14:31:28 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:01.576 14:31:28 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:01.576 14:31:28 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:01.576 14:31:28 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:01.576 14:31:28 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:33:01.576 14:31:28 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:33:01.576 14:31:28 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:33:01.576 14:31:28 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:33:01.576 14:31:28 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:33:01.576 14:31:28 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:01.576 14:31:28 nvmf_rdma.nvmf_target_disconnect -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:33:01.576 14:31:28 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:01.576 14:31:28 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:01.576 14:31:28 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:01.576 14:31:28 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:01.576 14:31:28 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:01.576 14:31:28 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:01.576 14:31:28 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:01.576 14:31:28 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:33:01.576 14:31:28 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:33:04.108 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:33:04.108 Found 0000:81:00.1 (0x15b3 - 0x1015) 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:33:04.108 Found net devices under 0000:81:00.0: mlx_0_0 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:33:04.108 Found net devices under 0000:81:00.1: mlx_0_1 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@58 -- # uname 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@64 -- # modprobe ib_umad 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:33:04.108 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:33:04.109 14:31:31 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:33:04.109 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:04.109 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:33:04.109 altname enp129s0f0np0 00:33:04.109 inet 192.168.100.8/24 scope global mlx_0_0 00:33:04.109 valid_lft forever preferred_lft forever 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@75 
-- # [[ -z 192.168.100.9 ]] 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:33:04.109 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:04.109 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:33:04.109 altname enp129s0f1np1 00:33:04.109 inet 192.168.100.9/24 scope global mlx_0_1 00:33:04.109 valid_lft forever preferred_lft forever 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:04.109 
14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:33:04.109 192.168.100.9' 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:33:04.109 192.168.100.9' 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:33:04.109 192.168.100.9' 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # tail -n +2 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:04.109 ************************************ 00:33:04.109 START TEST nvmf_target_disconnect_tc1 00:33:04.109 ************************************ 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 
traddr:192.168.100.8 trsvcid:4420' 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:33:04.109 14:31:31 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:04.110 14:31:31 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:33:04.110 14:31:31 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:33:04.110 14:31:31 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:33:04.367 EAL: No free 2048 kB hugepages reported on node 1 00:33:04.367 [2024-07-24 14:31:31.538239] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:33:04.367 [2024-07-24 14:31:31.538305] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:33:04.367 [2024-07-24 14:31:31.538323] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7040 00:33:05.301 [2024-07-24 14:31:32.542601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:33:05.301 [2024-07-24 14:31:32.542644] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:33:05.301 [2024-07-24 14:31:32.542664] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:33:05.301 [2024-07-24 14:31:32.542703] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:05.301 [2024-07-24 14:31:32.542722] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:33:05.301 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:33:05.301 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:33:05.301 Initializing NVMe Controllers 00:33:05.301 14:31:32 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:33:05.301 14:31:32 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:05.301 14:31:32 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:05.301 14:31:32 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:05.301 00:33:05.301 real 0m1.130s 00:33:05.301 user 0m0.897s 00:33:05.301 sys 0m0.219s 00:33:05.301 14:31:32 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:05.301 14:31:32 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:33:05.301 ************************************ 00:33:05.301 END TEST nvmf_target_disconnect_tc1 00:33:05.301 ************************************ 00:33:05.301 14:31:32 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:33:05.301 14:31:32 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:05.301 14:31:32 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:05.301 14:31:32 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:05.301 ************************************ 00:33:05.301 START TEST nvmf_target_disconnect_tc2 00:33:05.301 ************************************ 00:33:05.301 14:31:32 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:33:05.301 14:31:32 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:33:05.301 14:31:32 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:05.301 14:31:32 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:05.301 14:31:32 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:05.301 14:31:32 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:05.301 14:31:32 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=255080 00:33:05.301 14:31:32 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:05.301 14:31:32 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 255080 00:33:05.301 14:31:32 
nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 255080 ']' 00:33:05.301 14:31:32 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:05.301 14:31:32 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:05.301 14:31:32 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:05.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:05.301 14:31:32 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:05.301 14:31:32 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:05.301 [2024-07-24 14:31:32.648620] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:33:05.301 [2024-07-24 14:31:32.648692] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:05.559 EAL: No free 2048 kB hugepages reported on node 1 00:33:05.559 [2024-07-24 14:31:32.716433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:05.559 [2024-07-24 14:31:32.803828] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:05.559 [2024-07-24 14:31:32.803897] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:05.559 [2024-07-24 14:31:32.803926] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:05.559 [2024-07-24 14:31:32.803938] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:05.559 [2024-07-24 14:31:32.803947] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:05.559 [2024-07-24 14:31:32.804039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:33:05.559 [2024-07-24 14:31:32.804101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:33:05.559 [2024-07-24 14:31:32.804150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:33:05.559 [2024-07-24 14:31:32.804153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:33:05.817 14:31:32 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:05.817 14:31:32 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:33:05.817 14:31:32 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:05.817 14:31:32 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:05.817 14:31:32 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:05.817 14:31:32 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:05.817 14:31:32 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:05.817 14:31:32 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.817 14:31:32 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:05.817 Malloc0 00:33:05.817 14:31:32 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.817 14:31:32 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:33:05.817 14:31:32 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.817 14:31:32 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:05.817 [2024-07-24 14:31:33.010112] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdf9330/0xe04e40) succeed. 00:33:05.817 [2024-07-24 14:31:33.021434] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xdfa920/0xea4f40) succeed. 
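The target bring-up traced in the surrounding lines reduces to a short RPC sequence. A minimal sketch, assuming a running nvmf_tgt and the stock rpc.py client; the sizes, NQN, address, and port are copied from this trace, nothing else is added:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB RAM-backed namespace, 512 B blocks
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024  # matches NVMF_TRANSPORT_OPTS set above
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

The harness issues the same calls through its rpc_cmd wrapper, which talks to the /var/tmp/spdk.sock socket the app is waiting on.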
00:33:05.817 14:31:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.817 14:31:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:05.817 14:31:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.817 14:31:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:05.817 14:31:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.817 14:31:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:05.817 14:31:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.817 14:31:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:06.087 14:31:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.087 14:31:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:33:06.087 14:31:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.087 14:31:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:06.087 [2024-07-24 14:31:33.192803] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:33:06.087 14:31:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.087 14:31:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:33:06.087 14:31:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.087 14:31:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:06.087 14:31:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.087 14:31:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=255112 00:33:06.087 14:31:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:33:06.087 14:31:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:33:06.087 EAL: No free 2048 kB hugepages reported on node 1 00:33:07.994 14:31:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 255080 00:33:07.994 14:31:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:33:09.370 Read 
completed with error (sct=0, sc=8) 00:33:09.370 starting I/O failed 00:33:09.370 Read completed with error (sct=0, sc=8) 00:33:09.370 starting I/O failed 00:33:09.370 Write completed with error (sct=0, sc=8) 00:33:09.370 starting I/O failed 00:33:09.370 Write completed with error (sct=0, sc=8) 00:33:09.370 starting I/O failed 00:33:09.370 Write completed with error (sct=0, sc=8) 00:33:09.370 starting I/O failed 00:33:09.370 Read completed with error (sct=0, sc=8) 00:33:09.370 starting I/O failed 00:33:09.370 Write completed with error (sct=0, sc=8) 00:33:09.370 starting I/O failed 00:33:09.370 Read completed with error (sct=0, sc=8) 00:33:09.370 starting I/O failed 00:33:09.370 Read completed with error (sct=0, sc=8) 00:33:09.370 starting I/O failed 00:33:09.370 Write completed with error (sct=0, sc=8) 00:33:09.370 starting I/O failed 00:33:09.370 Read completed with error (sct=0, sc=8) 00:33:09.370 starting I/O failed 00:33:09.370 Read completed with error (sct=0, sc=8) 00:33:09.370 starting I/O failed 00:33:09.370 Read completed with error (sct=0, sc=8) 00:33:09.370 starting I/O failed 00:33:09.370 Write completed with error (sct=0, sc=8) 00:33:09.370 starting I/O failed 00:33:09.370 Write completed with error (sct=0, sc=8) 00:33:09.370 starting I/O failed 00:33:09.370 Write completed with error (sct=0, sc=8) 00:33:09.370 starting I/O failed 00:33:09.370 Write completed with error (sct=0, sc=8) 00:33:09.370 starting I/O failed 00:33:09.370 Write completed with error (sct=0, sc=8) 00:33:09.370 starting I/O failed 00:33:09.370 Write completed with error (sct=0, sc=8) 00:33:09.370 starting I/O failed 00:33:09.370 Write completed with error (sct=0, sc=8) 00:33:09.370 starting I/O failed 00:33:09.370 Write completed with error (sct=0, sc=8) 00:33:09.370 starting I/O failed 00:33:09.370 Write completed with error (sct=0, sc=8) 00:33:09.370 starting I/O failed 00:33:09.370 Read completed with error (sct=0, sc=8) 00:33:09.370 starting I/O failed 00:33:09.370 Write completed with error (sct=0, sc=8) 00:33:09.370 starting I/O failed 00:33:09.370 Write completed with error (sct=0, sc=8) 00:33:09.370 starting I/O failed 00:33:09.370 Read completed with error (sct=0, sc=8) 00:33:09.370 starting I/O failed 00:33:09.370 Read completed with error (sct=0, sc=8) 00:33:09.370 starting I/O failed 00:33:09.370 Read completed with error (sct=0, sc=8) 00:33:09.370 starting I/O failed 00:33:09.370 Read completed with error (sct=0, sc=8) 00:33:09.370 starting I/O failed 00:33:09.370 Read completed with error (sct=0, sc=8) 00:33:09.370 starting I/O failed 00:33:09.370 Read completed with error (sct=0, sc=8) 00:33:09.370 starting I/O failed 00:33:09.370 Read completed with error (sct=0, sc=8) 00:33:09.370 starting I/O failed 00:33:09.370 [2024-07-24 14:31:36.387508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:09.936 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 255080 Killed "${NVMF_APP[@]}" "$@" 00:33:09.936 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:33:09.936 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:09.936 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:09.936 14:31:37 
nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:09.936 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:09.936 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=255636 00:33:09.936 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:09.936 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 255636 00:33:09.936 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 255636 ']' 00:33:09.936 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:09.936 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:09.936 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:09.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:09.936 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:09.936 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:09.936 [2024-07-24 14:31:37.255030] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 
00:33:09.936 [2024-07-24 14:31:37.255130] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:09.936 EAL: No free 2048 kB hugepages reported on node 1 00:33:10.195 [2024-07-24 14:31:37.323367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:10.195 Read completed with error (sct=0, sc=8) 00:33:10.195 starting I/O failed 00:33:10.195 Write completed with error (sct=0, sc=8) 00:33:10.195 starting I/O failed 00:33:10.195 Read completed with error (sct=0, sc=8) 00:33:10.195 starting I/O failed 00:33:10.195 Read completed with error (sct=0, sc=8) 00:33:10.195 starting I/O failed 00:33:10.195 Read completed with error (sct=0, sc=8) 00:33:10.195 starting I/O failed 00:33:10.195 Read completed with error (sct=0, sc=8) 00:33:10.195 starting I/O failed 00:33:10.195 Write completed with error (sct=0, sc=8) 00:33:10.195 starting I/O failed 00:33:10.195 Write completed with error (sct=0, sc=8) 00:33:10.195 starting I/O failed 00:33:10.195 Write completed with error (sct=0, sc=8) 00:33:10.195 starting I/O failed 00:33:10.195 Read completed with error (sct=0, sc=8) 00:33:10.195 starting I/O failed 00:33:10.195 Write completed with error (sct=0, sc=8) 00:33:10.195 starting I/O failed 00:33:10.195 Write completed with error (sct=0, sc=8) 00:33:10.195 starting I/O failed 00:33:10.195 Read completed with error (sct=0, sc=8) 00:33:10.195 starting I/O failed 00:33:10.195 Read completed with error (sct=0, sc=8) 00:33:10.195 starting I/O failed 00:33:10.195 Read completed with error (sct=0, sc=8) 00:33:10.195 starting I/O failed 00:33:10.195 Read completed with error (sct=0, sc=8) 00:33:10.195 starting I/O failed 00:33:10.195 Read completed with error (sct=0, sc=8) 00:33:10.195 starting I/O failed 00:33:10.195 Read completed with error (sct=0, sc=8) 00:33:10.195 starting I/O failed 00:33:10.195 Read completed with error (sct=0, sc=8) 00:33:10.195 starting I/O failed 00:33:10.195 Write completed with error (sct=0, sc=8) 00:33:10.195 starting I/O failed 00:33:10.195 Write completed with error (sct=0, sc=8) 00:33:10.195 starting I/O failed 00:33:10.195 Read completed with error (sct=0, sc=8) 00:33:10.195 starting I/O failed 00:33:10.195 Write completed with error (sct=0, sc=8) 00:33:10.195 starting I/O failed 00:33:10.195 Write completed with error (sct=0, sc=8) 00:33:10.195 starting I/O failed 00:33:10.195 Read completed with error (sct=0, sc=8) 00:33:10.195 starting I/O failed 00:33:10.195 Write completed with error (sct=0, sc=8) 00:33:10.195 starting I/O failed 00:33:10.195 Read completed with error (sct=0, sc=8) 00:33:10.195 starting I/O failed 00:33:10.195 Write completed with error (sct=0, sc=8) 00:33:10.195 starting I/O failed 00:33:10.195 Write completed with error (sct=0, sc=8) 00:33:10.195 starting I/O failed 00:33:10.195 Read completed with error (sct=0, sc=8) 00:33:10.195 starting I/O failed 00:33:10.195 Read completed with error (sct=0, sc=8) 00:33:10.195 starting I/O failed 00:33:10.195 Read completed with error (sct=0, sc=8) 00:33:10.195 starting I/O failed 00:33:10.195 [2024-07-24 14:31:37.393073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:10.195 [2024-07-24 14:31:37.395017] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel 
(status = 8) 00:33:10.195 [2024-07-24 14:31:37.395048] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:33:10.195 [2024-07-24 14:31:37.395061] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:33:10.195 [2024-07-24 14:31:37.406799] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:10.195 [2024-07-24 14:31:37.406831] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:10.195 [2024-07-24 14:31:37.406860] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:10.195 [2024-07-24 14:31:37.406872] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:10.195 [2024-07-24 14:31:37.406882] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:10.195 [2024-07-24 14:31:37.406974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:33:10.195 [2024-07-24 14:31:37.407039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:33:10.195 [2024-07-24 14:31:37.407091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:33:10.196 [2024-07-24 14:31:37.407093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:33:10.196 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:10.196 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:33:10.196 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:10.196 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:10.196 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:10.196 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:10.196 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:10.196 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.196 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:10.454 Malloc0 00:33:10.454 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.454 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:33:10.454 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.454 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:10.454 [2024-07-24 14:31:37.614154] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x17e2330/0x17ede40) succeed. 00:33:10.454 [2024-07-24 14:31:37.625729] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x17e3920/0x188df40) succeed. 
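Both target generations above (pid 255080 and its replacement 255636) are started the same way; a sketch of the start-and-wait pattern, with the binary path, instance id, and core mask taken from this trace and an illustrative polling loop standing in for the harness's waitforlisten helper:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    # poll the RPC socket until the app answers (waitforlisten does this with a retry budget)
    until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
          rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

The -m 0xF0 core mask is why the reactors report starting on cores 4 through 7 in the notices above.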
00:33:10.454 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.454 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:10.454 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.454 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:10.454 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.454 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:10.454 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.454 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:10.454 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.454 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:33:10.455 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.455 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:10.455 [2024-07-24 14:31:37.798273] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:33:10.455 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.455 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:33:10.455 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.455 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:10.455 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.455 14:31:37 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 255112 00:33:11.389 [2024-07-24 14:31:38.399317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:11.389 qpair failed and we were unable to recover it. 
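The unrecoverable-qpair message above is the intended outcome of tc2's fault injection. A reconstructed shape of that choreography, assuming the harness's disconnect_init helper; every command and flag appears in this trace, while the PIDs (255080, 255112) are specific to this run:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect \
        -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' &
    reconnectpid=$!
    sleep 2
    kill -9 "$nvmfpid"               # hard-kill the target mid-I/O; the host sees CQ transport errors
    sleep 2
    disconnect_init 192.168.100.8    # harness helper: restart nvmf_tgt and rebuild the subsystem
    wait "$reconnectpid"             # the example must ride out the outage and finish its 10 s workload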
00:33:12.325 Write completed with error (sct=0, sc=8) 00:33:12.325 starting I/O failed 00:33:12.325 Write completed with error (sct=0, sc=8) 00:33:12.325 starting I/O failed 00:33:12.325 Read completed with error (sct=0, sc=8) 00:33:12.325 starting I/O failed 00:33:12.325 Write completed with error (sct=0, sc=8) 00:33:12.325 starting I/O failed 00:33:12.325 Read completed with error (sct=0, sc=8) 00:33:12.325 starting I/O failed 00:33:12.325 Read completed with error (sct=0, sc=8) 00:33:12.325 starting I/O failed 00:33:12.325 Write completed with error (sct=0, sc=8) 00:33:12.325 starting I/O failed 00:33:12.325 Read completed with error (sct=0, sc=8) 00:33:12.325 starting I/O failed 00:33:12.325 Read completed with error (sct=0, sc=8) 00:33:12.325 starting I/O failed 00:33:12.325 Read completed with error (sct=0, sc=8) 00:33:12.325 starting I/O failed 00:33:12.325 Read completed with error (sct=0, sc=8) 00:33:12.325 starting I/O failed 00:33:12.325 Write completed with error (sct=0, sc=8) 00:33:12.325 starting I/O failed 00:33:12.325 Write completed with error (sct=0, sc=8) 00:33:12.325 starting I/O failed 00:33:12.325 Write completed with error (sct=0, sc=8) 00:33:12.325 starting I/O failed 00:33:12.325 Write completed with error (sct=0, sc=8) 00:33:12.325 starting I/O failed 00:33:12.325 Write completed with error (sct=0, sc=8) 00:33:12.325 starting I/O failed 00:33:12.325 Read completed with error (sct=0, sc=8) 00:33:12.325 starting I/O failed 00:33:12.325 Write completed with error (sct=0, sc=8) 00:33:12.325 starting I/O failed 00:33:12.325 Read completed with error (sct=0, sc=8) 00:33:12.325 starting I/O failed 00:33:12.325 Read completed with error (sct=0, sc=8) 00:33:12.325 starting I/O failed 00:33:12.325 Read completed with error (sct=0, sc=8) 00:33:12.325 starting I/O failed 00:33:12.325 Write completed with error (sct=0, sc=8) 00:33:12.325 starting I/O failed 00:33:12.325 Write completed with error (sct=0, sc=8) 00:33:12.325 starting I/O failed 00:33:12.325 Write completed with error (sct=0, sc=8) 00:33:12.325 starting I/O failed 00:33:12.325 Read completed with error (sct=0, sc=8) 00:33:12.325 starting I/O failed 00:33:12.325 Read completed with error (sct=0, sc=8) 00:33:12.325 starting I/O failed 00:33:12.325 Read completed with error (sct=0, sc=8) 00:33:12.325 starting I/O failed 00:33:12.325 Read completed with error (sct=0, sc=8) 00:33:12.325 starting I/O failed 00:33:12.325 Read completed with error (sct=0, sc=8) 00:33:12.325 starting I/O failed 00:33:12.325 Write completed with error (sct=0, sc=8) 00:33:12.325 starting I/O failed 00:33:12.325 Write completed with error (sct=0, sc=8) 00:33:12.325 starting I/O failed 00:33:12.325 Write completed with error (sct=0, sc=8) 00:33:12.325 starting I/O failed 00:33:12.325 [2024-07-24 14:31:39.404866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:12.325 [2024-07-24 14:31:39.404907] nvme_ctrlr.c:4353:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:33:12.325 A controller has encountered a failure and is being reset. 
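When a burst like this one ends in a failed Keep Alive and a controller reset, the diagnostic hook is the one the app itself advertised in its startup notices; both steps below are quoted from those notices, with the spdk_trace path assumed to sit in the same build tree as nvmf_tgt:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0   # snapshot events at runtime
    cp /dev/shm/nvmf_trace.0 /tmp/   # or keep the shared-memory trace file for offline analysis/debug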
00:33:12.325 [2024-07-24 14:31:39.404986] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:33:12.325 [2024-07-24 14:31:39.406956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:33:12.325 Controller properly reset. 00:33:13.265 Write completed with error (sct=0, sc=8) 00:33:13.265 starting I/O failed 00:33:13.265 Write completed with error (sct=0, sc=8) 00:33:13.265 starting I/O failed 00:33:13.265 Read completed with error (sct=0, sc=8) 00:33:13.265 starting I/O failed 00:33:13.265 Write completed with error (sct=0, sc=8) 00:33:13.265 starting I/O failed 00:33:13.265 Write completed with error (sct=0, sc=8) 00:33:13.265 starting I/O failed 00:33:13.265 Read completed with error (sct=0, sc=8) 00:33:13.265 starting I/O failed 00:33:13.265 Write completed with error (sct=0, sc=8) 00:33:13.265 starting I/O failed 00:33:13.265 Read completed with error (sct=0, sc=8) 00:33:13.265 starting I/O failed 00:33:13.265 Read completed with error (sct=0, sc=8) 00:33:13.265 starting I/O failed 00:33:13.265 Read completed with error (sct=0, sc=8) 00:33:13.265 starting I/O failed 00:33:13.265 Read completed with error (sct=0, sc=8) 00:33:13.265 starting I/O failed 00:33:13.265 Write completed with error (sct=0, sc=8) 00:33:13.265 starting I/O failed 00:33:13.265 Write completed with error (sct=0, sc=8) 00:33:13.265 starting I/O failed 00:33:13.265 Write completed with error (sct=0, sc=8) 00:33:13.265 starting I/O failed 00:33:13.265 Read completed with error (sct=0, sc=8) 00:33:13.265 starting I/O failed 00:33:13.265 Write completed with error (sct=0, sc=8) 00:33:13.265 starting I/O failed 00:33:13.265 Write completed with error (sct=0, sc=8) 00:33:13.265 starting I/O failed 00:33:13.265 Write completed with error (sct=0, sc=8) 00:33:13.265 starting I/O failed 00:33:13.265 Read completed with error (sct=0, sc=8) 00:33:13.265 starting I/O failed 00:33:13.265 Read completed with error (sct=0, sc=8) 00:33:13.265 starting I/O failed 00:33:13.265 Read completed with error (sct=0, sc=8) 00:33:13.265 starting I/O failed 00:33:13.265 Write completed with error (sct=0, sc=8) 00:33:13.265 starting I/O failed 00:33:13.265 Read completed with error (sct=0, sc=8) 00:33:13.265 starting I/O failed 00:33:13.265 Write completed with error (sct=0, sc=8) 00:33:13.265 starting I/O failed 00:33:13.265 Read completed with error (sct=0, sc=8) 00:33:13.265 starting I/O failed 00:33:13.265 Write completed with error (sct=0, sc=8) 00:33:13.265 starting I/O failed 00:33:13.265 Read completed with error (sct=0, sc=8) 00:33:13.265 starting I/O failed 00:33:13.265 Write completed with error (sct=0, sc=8) 00:33:13.265 starting I/O failed 00:33:13.265 Write completed with error (sct=0, sc=8) 00:33:13.265 starting I/O failed 00:33:13.265 Read completed with error (sct=0, sc=8) 00:33:13.265 starting I/O failed 00:33:13.265 Read completed with error (sct=0, sc=8) 00:33:13.265 starting I/O failed 00:33:13.265 Read completed with error (sct=0, sc=8) 00:33:13.265 starting I/O failed 00:33:13.265 [2024-07-24 14:31:40.456314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.547 Initializing NVMe Controllers 00:33:16.547 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:33:16.547 Attached to NVMe over Fabrics 
controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:33:16.547 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:33:16.547 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:33:16.547 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:33:16.547 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:33:16.547 Initialization complete. Launching workers. 00:33:16.547 Starting thread on core 1 00:33:16.547 Starting thread on core 2 00:33:16.547 Starting thread on core 3 00:33:16.547 Starting thread on core 0 00:33:16.547 14:31:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:33:16.547 00:33:16.547 real 0m10.873s 00:33:16.547 user 0m36.565s 00:33:16.547 sys 0m1.858s 00:33:16.547 14:31:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:16.547 14:31:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:16.547 ************************************ 00:33:16.547 END TEST nvmf_target_disconnect_tc2 00:33:16.547 ************************************ 00:33:16.547 14:31:43 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:33:16.547 14:31:43 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:33:16.547 14:31:43 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:16.547 14:31:43 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:16.547 14:31:43 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:16.547 ************************************ 00:33:16.547 START TEST nvmf_target_disconnect_tc3 00:33:16.547 ************************************ 00:33:16.547 14:31:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc3 00:33:16.547 14:31:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=256334 00:33:16.547 14:31:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:33:16.547 14:31:43 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2 00:33:16.547 EAL: No free 2048 kB hugepages reported on node 1 00:33:18.449 14:31:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 255636 00:33:18.449 14:31:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:33:19.386 Write completed with error (sct=0, sc=8) 00:33:19.386 starting I/O failed 00:33:19.386 Write completed with error (sct=0, sc=8) 00:33:19.386 starting I/O failed 00:33:19.386 Read completed with error (sct=0, sc=8) 00:33:19.386 starting I/O failed 00:33:19.386 Read completed with error (sct=0, sc=8) 00:33:19.386 starting I/O failed 00:33:19.386 Write completed with error (sct=0, sc=8) 00:33:19.386 starting I/O failed 00:33:19.386 Read 
completed with error (sct=0, sc=8) 00:33:19.386 starting I/O failed 00:33:19.386 Write completed with error (sct=0, sc=8) 00:33:19.386 starting I/O failed 00:33:19.386 Write completed with error (sct=0, sc=8) 00:33:19.386 starting I/O failed 00:33:19.386 Write completed with error (sct=0, sc=8) 00:33:19.386 starting I/O failed 00:33:19.386 Read completed with error (sct=0, sc=8) 00:33:19.386 starting I/O failed 00:33:19.386 Read completed with error (sct=0, sc=8) 00:33:19.386 starting I/O failed 00:33:19.386 Write completed with error (sct=0, sc=8) 00:33:19.386 starting I/O failed 00:33:19.386 Write completed with error (sct=0, sc=8) 00:33:19.386 starting I/O failed 00:33:19.386 Write completed with error (sct=0, sc=8) 00:33:19.386 starting I/O failed 00:33:19.386 Read completed with error (sct=0, sc=8) 00:33:19.386 starting I/O failed 00:33:19.386 Write completed with error (sct=0, sc=8) 00:33:19.386 starting I/O failed 00:33:19.386 Write completed with error (sct=0, sc=8) 00:33:19.386 starting I/O failed 00:33:19.386 Write completed with error (sct=0, sc=8) 00:33:19.386 starting I/O failed 00:33:19.386 Write completed with error (sct=0, sc=8) 00:33:19.386 starting I/O failed 00:33:19.386 Read completed with error (sct=0, sc=8) 00:33:19.386 starting I/O failed 00:33:19.386 Write completed with error (sct=0, sc=8) 00:33:19.386 starting I/O failed 00:33:19.386 Read completed with error (sct=0, sc=8) 00:33:19.386 starting I/O failed 00:33:19.386 Read completed with error (sct=0, sc=8) 00:33:19.386 starting I/O failed 00:33:19.386 Write completed with error (sct=0, sc=8) 00:33:19.386 starting I/O failed 00:33:19.386 Read completed with error (sct=0, sc=8) 00:33:19.386 starting I/O failed 00:33:19.386 Write completed with error (sct=0, sc=8) 00:33:19.386 starting I/O failed 00:33:19.386 Write completed with error (sct=0, sc=8) 00:33:19.386 starting I/O failed 00:33:19.387 Read completed with error (sct=0, sc=8) 00:33:19.387 starting I/O failed 00:33:19.387 Read completed with error (sct=0, sc=8) 00:33:19.387 starting I/O failed 00:33:19.387 Write completed with error (sct=0, sc=8) 00:33:19.387 starting I/O failed 00:33:19.387 Write completed with error (sct=0, sc=8) 00:33:19.387 starting I/O failed 00:33:19.387 Read completed with error (sct=0, sc=8) 00:33:19.387 starting I/O failed 00:33:19.387 [2024-07-24 14:31:46.689861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:20.352 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 255636 Killed "${NVMF_APP[@]}" "$@" 00:33:20.352 14:31:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 00:33:20.352 14:31:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:20.352 14:31:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:20.352 14:31:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:20.352 14:31:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:20.352 14:31:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@481 -- # nvmfpid=256863 00:33:20.353 14:31:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@480 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:33:20.353 14:31:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@482 -- # waitforlisten 256863
00:33:20.353 14:31:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@827 -- # '[' -z 256863 ']'
00:33:20.353 14:31:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:20.353 14:31:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100
00:33:20.353 14:31:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:20.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:20.353 14:31:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable
00:33:20.353 14:31:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:33:20.353 [2024-07-24 14:31:47.570769] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization...
00:33:20.353 [2024-07-24 14:31:47.570877] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:20.353 EAL: No free 2048 kB hugepages reported on node 1
00:33:20.353 [2024-07-24 14:31:47.652930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:33:20.353 Write completed with error (sct=0, sc=8)
00:33:20.353 starting I/O failed
00:33:20.353 Write completed with error (sct=0, sc=8)
00:33:20.353 starting I/O failed
00:33:20.353 Write completed with error (sct=0, sc=8)
00:33:20.353 starting I/O failed
00:33:20.353 Read completed with error (sct=0, sc=8)
00:33:20.353 starting I/O failed
00:33:20.353 Read completed with error (sct=0, sc=8)
00:33:20.353 starting I/O failed
00:33:20.353 Write completed with error (sct=0, sc=8)
00:33:20.353 starting I/O failed
00:33:20.353 Read completed with error (sct=0, sc=8)
00:33:20.353 starting I/O failed
00:33:20.353 Read completed with error (sct=0, sc=8)
00:33:20.353 starting I/O failed
00:33:20.353 Write completed with error (sct=0, sc=8)
00:33:20.353 starting I/O failed
00:33:20.353 Write completed with error (sct=0, sc=8)
00:33:20.353 starting I/O failed
00:33:20.353 Write completed with error (sct=0, sc=8)
00:33:20.353 starting I/O failed
00:33:20.353 Read completed with error (sct=0, sc=8)
00:33:20.353 starting I/O failed
00:33:20.353 Write completed with error (sct=0, sc=8)
00:33:20.353 starting I/O failed
00:33:20.353 Write completed with error (sct=0, sc=8)
00:33:20.353 starting I/O failed
00:33:20.353 Write completed with error (sct=0, sc=8)
00:33:20.353 starting I/O failed
00:33:20.353 Read completed with error (sct=0, sc=8)
00:33:20.353 starting I/O failed
00:33:20.353 Write completed with error (sct=0, sc=8)
00:33:20.353 starting I/O failed
00:33:20.353 Read completed with error (sct=0, sc=8)
00:33:20.353 starting I/O failed
00:33:20.353 Write completed with error (sct=0, sc=8)
00:33:20.353 starting I/O failed
00:33:20.353 Read completed with error (sct=0, sc=8)
00:33:20.353 starting I/O failed
00:33:20.353 Write completed with error (sct=0, sc=8)
00:33:20.353 starting I/O failed
00:33:20.353 Write completed with error (sct=0, sc=8)
00:33:20.353 starting I/O failed
00:33:20.353 Read completed with error (sct=0, sc=8)
00:33:20.353 starting I/O failed
00:33:20.353 Read completed with error (sct=0, sc=8)
00:33:20.353 starting I/O failed
00:33:20.353 Write completed with error (sct=0, sc=8)
00:33:20.353 starting I/O failed
00:33:20.353 Write completed with error (sct=0, sc=8)
00:33:20.353 starting I/O failed
00:33:20.353 Write completed with error (sct=0, sc=8)
00:33:20.353 starting I/O failed
00:33:20.353 Write completed with error (sct=0, sc=8)
00:33:20.353 starting I/O failed
00:33:20.353 Write completed with error (sct=0, sc=8)
00:33:20.353 starting I/O failed
00:33:20.353 Read completed with error (sct=0, sc=8)
00:33:20.353 starting I/O failed
00:33:20.353 Read completed with error (sct=0, sc=8)
00:33:20.353 starting I/O failed
00:33:20.353 Read completed with error (sct=0, sc=8)
00:33:20.353 starting I/O failed
00:33:20.353 [2024-07-24 14:31:47.695211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:20.612 [2024-07-24 14:31:47.746993] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:20.612 [2024-07-24 14:31:47.747056] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:20.612 [2024-07-24 14:31:47.747093] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:20.612 [2024-07-24 14:31:47.747111] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:20.612 [2024-07-24 14:31:47.747141] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
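The app_setup_trace notices above already name the tooling for digging into failures like the CQ transport errors in this run. A minimal sketch based only on those notices; the binary path and shm ID 0 are taken from this log and will differ on other runs:

    # Snapshot the enabled nvmf tracepoints of the still-running target (app name nvmf, shm ID 0).
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0
    # Or, once the app has exited, keep the trace file for offline analysis as the notice suggests.
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0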
00:33:20.612 [2024-07-24 14:31:47.747240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:33:20.612 [2024-07-24 14:31:47.747308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:33:20.612 [2024-07-24 14:31:47.747359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:33:20.612 [2024-07-24 14:31:47.747364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:33:21.550 14:31:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:33:21.550 14:31:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@860 -- # return 0
00:33:21.550 14:31:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:33:21.550 14:31:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable
00:33:21.550 14:31:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:33:21.550 14:31:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:21.550 14:31:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:33:21.550 14:31:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:21.550 14:31:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:33:21.550 Malloc0
00:33:21.550 14:31:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:21.550 14:31:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024
00:33:21.550 14:31:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:21.550 14:31:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:33:21.550 [2024-07-24 14:31:48.629902] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1641330/0x164ce40) succeed.
00:33:21.550 [2024-07-24 14:31:48.641476] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1642920/0x16ecf40) succeed.
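The rpc_cmd calls traced in this block (and in the lines that follow) are thin wrappers around the repo's RPC client. A standalone sketch of the same tc3 target bring-up against the default /var/tmp/spdk.sock socket, using only arguments that appear in this trace:

    cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
    # 64 MiB malloc bdev with 512-byte blocks to back the namespace.
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # RDMA transport with the shared-buffer count this test uses.
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    # Subsystem, namespace, and listeners, mirroring host/target_disconnect.sh lines 19-26.
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420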
00:33:21.550 Write completed with error (sct=0, sc=8)
00:33:21.550 starting I/O failed
00:33:21.550 Read completed with error (sct=0, sc=8)
00:33:21.550 starting I/O failed
00:33:21.550 Write completed with error (sct=0, sc=8)
00:33:21.550 starting I/O failed
00:33:21.550 Read completed with error (sct=0, sc=8)
00:33:21.550 starting I/O failed
00:33:21.550 Read completed with error (sct=0, sc=8)
00:33:21.550 starting I/O failed
00:33:21.550 Write completed with error (sct=0, sc=8)
00:33:21.550 starting I/O failed
00:33:21.550 Read completed with error (sct=0, sc=8)
00:33:21.550 starting I/O failed
00:33:21.550 Write completed with error (sct=0, sc=8)
00:33:21.550 starting I/O failed
00:33:21.550 Write completed with error (sct=0, sc=8)
00:33:21.550 starting I/O failed
00:33:21.550 Read completed with error (sct=0, sc=8)
00:33:21.550 starting I/O failed
00:33:21.550 Write completed with error (sct=0, sc=8)
00:33:21.550 starting I/O failed
00:33:21.550 Write completed with error (sct=0, sc=8)
00:33:21.550 starting I/O failed
00:33:21.550 Write completed with error (sct=0, sc=8)
00:33:21.550 starting I/O failed
00:33:21.550 Write completed with error (sct=0, sc=8)
00:33:21.550 starting I/O failed
00:33:21.550 Write completed with error (sct=0, sc=8)
00:33:21.550 starting I/O failed
00:33:21.550 Write completed with error (sct=0, sc=8)
00:33:21.550 starting I/O failed
00:33:21.550 Read completed with error (sct=0, sc=8)
00:33:21.550 starting I/O failed
00:33:21.550 Read completed with error (sct=0, sc=8)
00:33:21.550 starting I/O failed
00:33:21.550 Read completed with error (sct=0, sc=8)
00:33:21.550 starting I/O failed
00:33:21.550 Write completed with error (sct=0, sc=8)
00:33:21.550 starting I/O failed
00:33:21.550 Read completed with error (sct=0, sc=8)
00:33:21.550 starting I/O failed
00:33:21.550 Read completed with error (sct=0, sc=8)
00:33:21.550 starting I/O failed
00:33:21.550 Read completed with error (sct=0, sc=8)
00:33:21.550 starting I/O failed
00:33:21.550 Write completed with error (sct=0, sc=8)
00:33:21.550 starting I/O failed
00:33:21.550 Read completed with error (sct=0, sc=8)
00:33:21.550 starting I/O failed
00:33:21.550 Write completed with error (sct=0, sc=8)
00:33:21.550 starting I/O failed
00:33:21.550 Write completed with error (sct=0, sc=8)
00:33:21.550 starting I/O failed
00:33:21.550 Read completed with error (sct=0, sc=8)
00:33:21.550 starting I/O failed
00:33:21.550 Read completed with error (sct=0, sc=8)
00:33:21.550 starting I/O failed
00:33:21.550 Read completed with error (sct=0, sc=8)
00:33:21.550 starting I/O failed
00:33:21.550 Read completed with error (sct=0, sc=8)
00:33:21.550 starting I/O failed
00:33:21.550 Write completed with error (sct=0, sc=8)
00:33:21.550 starting I/O failed
00:33:21.550 [2024-07-24 14:31:48.700913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:21.550 14:31:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:21.550 14:31:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:33:21.550 14:31:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:21.550 14:31:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:33:21.550 14:31:48
nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.550 14:31:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:21.550 14:31:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.550 14:31:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:21.550 14:31:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.550 14:31:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:33:21.550 14:31:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.550 14:31:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:21.550 [2024-07-24 14:31:48.816566] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:33:21.550 14:31:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.550 14:31:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:33:21.550 14:31:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.550 14:31:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:21.550 14:31:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.550 14:31:48 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 256334 00:33:22.484 Read completed with error (sct=0, sc=8) 00:33:22.484 starting I/O failed 00:33:22.484 Write completed with error (sct=0, sc=8) 00:33:22.484 starting I/O failed 00:33:22.484 Read completed with error (sct=0, sc=8) 00:33:22.484 starting I/O failed 00:33:22.484 Write completed with error (sct=0, sc=8) 00:33:22.484 starting I/O failed 00:33:22.484 Read completed with error (sct=0, sc=8) 00:33:22.484 starting I/O failed 00:33:22.484 Write completed with error (sct=0, sc=8) 00:33:22.484 starting I/O failed 00:33:22.484 Write completed with error (sct=0, sc=8) 00:33:22.484 starting I/O failed 00:33:22.484 Read completed with error (sct=0, sc=8) 00:33:22.484 starting I/O failed 00:33:22.484 Write completed with error (sct=0, sc=8) 00:33:22.484 starting I/O failed 00:33:22.484 Write completed with error (sct=0, sc=8) 00:33:22.484 starting I/O failed 00:33:22.484 Read completed with error (sct=0, sc=8) 00:33:22.484 starting I/O failed 00:33:22.484 Read completed with error (sct=0, sc=8) 00:33:22.484 starting I/O failed 00:33:22.484 Write completed with error (sct=0, sc=8) 00:33:22.484 starting I/O failed 00:33:22.485 Write completed with error (sct=0, sc=8) 00:33:22.485 starting I/O failed 00:33:22.485 Read completed with error (sct=0, sc=8) 00:33:22.485 starting I/O failed 00:33:22.485 Write completed with error (sct=0, sc=8) 00:33:22.485 starting I/O failed 
00:33:22.485 Write completed with error (sct=0, sc=8) 00:33:22.485 starting I/O failed 00:33:22.485 Write completed with error (sct=0, sc=8) 00:33:22.485 starting I/O failed 00:33:22.485 Write completed with error (sct=0, sc=8) 00:33:22.485 starting I/O failed 00:33:22.485 Read completed with error (sct=0, sc=8) 00:33:22.485 starting I/O failed 00:33:22.485 Read completed with error (sct=0, sc=8) 00:33:22.485 starting I/O failed 00:33:22.485 Read completed with error (sct=0, sc=8) 00:33:22.485 starting I/O failed 00:33:22.485 Write completed with error (sct=0, sc=8) 00:33:22.485 starting I/O failed 00:33:22.485 Write completed with error (sct=0, sc=8) 00:33:22.485 starting I/O failed 00:33:22.485 Write completed with error (sct=0, sc=8) 00:33:22.485 starting I/O failed 00:33:22.485 Read completed with error (sct=0, sc=8) 00:33:22.485 starting I/O failed 00:33:22.485 Write completed with error (sct=0, sc=8) 00:33:22.485 starting I/O failed 00:33:22.485 Read completed with error (sct=0, sc=8) 00:33:22.485 starting I/O failed 00:33:22.485 Write completed with error (sct=0, sc=8) 00:33:22.485 starting I/O failed 00:33:22.485 Write completed with error (sct=0, sc=8) 00:33:22.485 starting I/O failed 00:33:22.485 Read completed with error (sct=0, sc=8) 00:33:22.485 starting I/O failed 00:33:22.485 Read completed with error (sct=0, sc=8) 00:33:22.485 starting I/O failed 00:33:22.485 [2024-07-24 14:31:49.706577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:22.485 [2024-07-24 14:31:49.706605] nvme_ctrlr.c:4353:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:33:22.485 A controller has encountered a failure and is being reset. 00:33:22.485 Resorting to new failover address 192.168.100.9 00:33:22.485 [2024-07-24 14:31:49.706657] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.485 [2024-07-24 14:31:49.706693] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:33:22.485 [2024-07-24 14:31:49.723753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:33:22.485 Controller properly reset. 00:33:26.671 Initializing NVMe Controllers 00:33:26.671 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:33:26.671 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:33:26.671 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:33:26.671 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:33:26.671 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:33:26.671 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:33:26.671 Initialization complete. Launching workers. 
00:33:26.671 Starting thread on core 1 00:33:26.671 Starting thread on core 2 00:33:26.671 Starting thread on core 3 00:33:26.671 Starting thread on core 0 00:33:26.671 14:31:53 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:33:26.671 00:33:26.671 real 0m10.267s 00:33:26.671 user 0m59.446s 00:33:26.671 sys 0m1.588s 00:33:26.671 14:31:53 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:26.671 14:31:53 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:26.671 ************************************ 00:33:26.671 END TEST nvmf_target_disconnect_tc3 00:33:26.671 ************************************ 00:33:26.671 14:31:53 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:33:26.671 14:31:53 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:33:26.671 14:31:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:26.671 14:31:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:33:26.671 14:31:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:33:26.671 14:31:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:33:26.671 14:31:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:33:26.671 14:31:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:26.671 14:31:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:33:26.671 rmmod nvme_rdma 00:33:26.671 rmmod nvme_fabrics 00:33:26.671 14:31:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:26.671 14:31:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:33:26.671 14:31:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:33:26.671 14:31:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 256863 ']' 00:33:26.671 14:31:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 256863 00:33:26.671 14:31:53 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 256863 ']' 00:33:26.671 14:31:53 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 256863 00:33:26.671 14:31:53 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:33:26.671 14:31:53 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:26.671 14:31:53 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 256863 00:33:26.671 14:31:53 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_4 00:33:26.671 14:31:53 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:33:26.671 14:31:53 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 256863' 00:33:26.671 killing process with pid 256863 00:33:26.671 14:31:53 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 256863 00:33:26.671 14:31:53 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 256863 00:33:26.930 14:31:54 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:26.930 14:31:54 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:33:26.930 00:33:26.930 real 0m25.439s 00:33:26.930 user 2m3.124s 00:33:26.930 sys 0m5.861s 00:33:26.930 14:31:54 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:26.930 14:31:54 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:26.930 ************************************ 00:33:26.930 END TEST nvmf_target_disconnect 00:33:26.930 ************************************ 00:33:26.930 14:31:54 nvmf_rdma -- nvmf/nvmf.sh@126 -- # timing_exit host 00:33:26.930 14:31:54 nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:26.930 14:31:54 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:26.930 14:31:54 nvmf_rdma -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:33:26.930 00:33:26.930 real 26m21.287s 00:33:26.930 user 83m20.720s 00:33:26.930 sys 3m28.352s 00:33:26.930 14:31:54 nvmf_rdma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:26.930 14:31:54 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:26.930 ************************************ 00:33:26.930 END TEST nvmf_rdma 00:33:26.930 ************************************ 00:33:26.930 14:31:54 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:33:26.930 14:31:54 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:26.930 14:31:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:26.930 14:31:54 -- common/autotest_common.sh@10 -- # set +x 00:33:26.930 ************************************ 00:33:26.930 START TEST spdkcli_nvmf_rdma 00:33:26.930 ************************************ 00:33:26.930 14:31:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:33:27.189 * Looking for test storage... 
00:33:27.189 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6b85a288-a0c4-e211-af09-001e678e7911 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=6b85a288-a0c4-e211-af09-001e678e7911 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- nvmf/common.sh@47 -- # : 0 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=257773 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 257773 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@827 -- # '[' -z 257773 ']' 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@834 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:27.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:27.189 14:31:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:27.189 [2024-07-24 14:31:54.372748] Starting SPDK v24.05.1-pre git sha1 241d0f3c9 / DPDK 22.11.4 initialization... 00:33:27.189 [2024-07-24 14:31:54.372879] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid257773 ] 00:33:27.189 EAL: No free 2048 kB hugepages reported on node 1 00:33:27.189 [2024-07-24 14:31:54.437574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:27.189 [2024-07-24 14:31:54.520172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:27.189 [2024-07-24 14:31:54.520176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:27.447 14:31:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:27.447 14:31:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@860 -- # return 0 00:33:27.447 14:31:54 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:27.447 14:31:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:27.447 14:31:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:27.447 14:31:54 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:27.447 14:31:54 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:33:27.447 14:31:54 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:33:27.447 14:31:54 spdkcli_nvmf_rdma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:33:27.447 14:31:54 spdkcli_nvmf_rdma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:27.447 14:31:54 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:27.447 14:31:54 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:27.447 14:31:54 spdkcli_nvmf_rdma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:27.447 14:31:54 spdkcli_nvmf_rdma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:27.447 14:31:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:27.447 14:31:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:27.447 14:31:54 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:27.447 14:31:54 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:27.447 14:31:54 spdkcli_nvmf_rdma -- nvmf/common.sh@285 -- # xtrace_disable 00:33:27.447 14:31:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # pci_devs=() 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # local -A pci_drivers 
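waitforlisten, traced above for pid 257773, polls until the target answers on /var/tmp/spdk.sock before the test proceeds. A rough standalone equivalent of that readiness check; rpc_get_methods is simply a cheap RPC to probe with, not necessarily the call the helper itself issues:

    # Retry until the RPC socket accepts requests, then continue.
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done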
00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # net_devs=() 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # e810=() 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # local -ga e810 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # x722=() 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # local -ga x722 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # mlx=() 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # local -ga mlx 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.0 (0x15b3 - 0x1015)' 00:33:29.975 Found 0000:81:00.0 (0x15b3 - 0x1015) 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:81:00.1 (0x15b3 - 0x1015)' 00:33:29.975 Found 0000:81:00.1 (0x15b3 - 0x1015) 
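The gather_supported_nvmf_pci_devs scan above matched both adapter ports against the Mellanox bus-cache entry (vendor 0x15b3, device 0x1015). The same two functions can be listed directly with a vendor:device filter; a sketch, with the IDs taken from this log:

    # ConnectX ports at 0000:81:00.0 and 0000:81:00.1 carry vendor 15b3, device 1015.
    lspci -d 15b3:1015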
00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.0: mlx_0_0' 00:33:29.975 Found net devices under 0000:81:00.0: mlx_0_0 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:29.975 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:81:00.1: mlx_0_1' 00:33:29.976 Found net devices under 0000:81:00.1: mlx_0_1 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # is_hw=yes 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@420 -- # rdma_device_init 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # uname 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@63 -- # modprobe ib_core 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe iw_cm 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- 
nvmf/common.sh@68 -- # modprobe rdma_ucm 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:33:29.976 12: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:29.976 link/ether 24:8a:07:4b:f4:30 brd ff:ff:ff:ff:ff:ff 00:33:29.976 altname enp129s0f0np0 00:33:29.976 inet 192.168.100.8/24 scope global mlx_0_0 00:33:29.976 valid_lft forever preferred_lft forever 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 
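allocate_nic_ips derives each port's IPv4 address exactly as traced here. Pulled out of the harness, the extraction is a one-liner per interface; the interface names are the ones this log reports and will vary by machine:

    for ifc in mlx_0_0 mlx_0_1; do
        # Field 4 of `ip -o -4 addr show` is ADDR/PREFIX; drop the prefix length.
        ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
    done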
00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:33:29.976 13: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:29.976 link/ether 24:8a:07:4b:f4:31 brd ff:ff:ff:ff:ff:ff 00:33:29.976 altname enp129s0f1np1 00:33:29.976 inet 192.168.100.9/24 scope global mlx_0_1 00:33:29.976 valid_lft forever preferred_lft forever 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # return 0 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- 
nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:33:29.976 192.168.100.9' 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:33:29.976 192.168.100.9' 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # head -n 1 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:33:29.976 192.168.100.9' 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # tail -n +2 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # head -n 1 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:29.976 14:31:57 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:29.976 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:29.976 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:29.976 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:29.976 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:29.976 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:29.976 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:29.976 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:29.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:29.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:29.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:33:29.977 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:29.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:29.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:33:29.977 
'\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:29.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:29.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:33:29.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:33:29.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:29.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:29.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:29.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:29.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:33:29.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:33:29.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:29.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:29.977 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:29.977 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:29.977 ' 00:33:32.512 [2024-07-24 14:31:59.791190] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f750e0/0x1e12240) succeed. 00:33:32.512 [2024-07-24 14:31:59.803936] rdma.c:2576:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f765e0/0x1e5d240) succeed. 
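Each quoted command in the batch above is fed to spdkcli by spdkcli_job.py, which also checks the expected output token. The same tree can be built or inspected one command at a time through the repo's CLI entry point, as the check_match step further down does with 'll /nvmf'; a sketch using this workspace's paths:

    # Create one backing bdev, then dump the /nvmf subtree the way check_match does.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf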
00:33:33.889 [2024-07-24 14:32:01.112202] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:33:36.423 [2024-07-24 14:32:03.403610] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:33:38.329 [2024-07-24 14:32:05.346098] rdma.c:3031:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:33:39.767 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:39.767 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:39.767 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:39.767 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:39.767 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:39.767 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:39.767 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:39.767 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:39.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:39.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:39.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:33:39.767 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:39.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:39.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:33:39.767 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:39.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:39.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:33:39.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:33:39.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:39.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:39.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:39.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:39.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:33:39.767 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:33:39.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:39.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:39.767 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:39.767 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:39.767 14:32:06 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:39.767 14:32:06 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:39.767 14:32:06 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:39.767 14:32:06 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:39.767 14:32:06 spdkcli_nvmf_rdma -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:39.767 14:32:06 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:39.767 14:32:06 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:33:39.767 14:32:06 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:40.336 14:32:07 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:40.336 14:32:07 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:40.336 14:32:07 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:40.336 14:32:07 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:40.336 14:32:07 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:40.336 14:32:07 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:40.336 14:32:07 spdkcli_nvmf_rdma -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:40.336 14:32:07 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:40.336 14:32:07 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:40.336 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:40.336 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:40.336 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:40.336 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:33:40.336 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:33:40.336 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:40.336 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:40.336 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:40.336 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:40.336 
00:33:40.336 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\''
00:33:40.336 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\''
00:33:40.336 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\''
00:33:40.336 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' '
00:33:45.619 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False]
00:33:45.619 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False]
00:33:45.619 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False]
00:33:45.619 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False]
00:33:45.619 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False]
00:33:45.619 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False]
00:33:45.619 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False]
00:33:45.619 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False]
00:33:45.619 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False]
00:33:45.619 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False]
00:33:45.619 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False]
00:33:45.619 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False]
00:33:45.619 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False]
00:33:45.619 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False]
00:33:45.619 14:32:12 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config
00:33:45.619 14:32:12 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable
00:33:45.619 14:32:12 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:33:45.619 14:32:12 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 257773
00:33:45.619 14:32:12 spdkcli_nvmf_rdma -- common/autotest_common.sh@946 -- # '[' -z 257773 ']'
00:33:45.619 14:32:12 spdkcli_nvmf_rdma -- common/autotest_common.sh@950 -- # kill -0 257773
00:33:45.619 14:32:12 spdkcli_nvmf_rdma -- common/autotest_common.sh@951 -- # uname
00:33:45.619 14:32:12 spdkcli_nvmf_rdma -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:45.619 14:32:12 spdkcli_nvmf_rdma -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 257773
00:33:45.619 14:32:12 spdkcli_nvmf_rdma -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:33:45.619 14:32:12 spdkcli_nvmf_rdma -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:33:45.619 14:32:12 spdkcli_nvmf_rdma -- common/autotest_common.sh@964 -- # echo 'killing process with pid 257773'
killing process with pid 257773
14:32:12 spdkcli_nvmf_rdma -- common/autotest_common.sh@965 -- # kill 257773
14:32:12 spdkcli_nvmf_rdma -- common/autotest_common.sh@970 -- # wait 257773
00:33:45.877 14:32:13 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini
00:33:45.877 14:32:13 spdkcli_nvmf_rdma -- nvmf/common.sh@488 -- # nvmfcleanup
00:33:45.877 14:32:13 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # sync
00:33:45.877 14:32:13 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:33:45.877 14:32:13 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:33:45.877 14:32:13 spdkcli_nvmf_rdma -- nvmf/common.sh@120 -- # set +e
00:33:45.877 14:32:13 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # for i in {1..20}
00:33:45.877 14:32:13 spdkcli_nvmf_rdma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:33:45.877 rmmod nvme_rdma
rmmod nvme_fabrics
00:33:45.877 14:32:13 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:33:45.877 14:32:13 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set -e
00:33:45.877 14:32:13 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # return 0
00:33:45.877 14:32:13 spdkcli_nvmf_rdma -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:33:45.877 14:32:13 spdkcli_nvmf_rdma -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:33:45.877 14:32:13 spdkcli_nvmf_rdma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]]
00:33:45.877
00:33:45.877 real 0m18.900s
00:33:45.877 user 0m40.524s
00:33:45.877 sys 0m2.568s
00:33:45.877 14:32:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@1122 -- # xtrace_disable
00:33:45.877 14:32:13 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:33:45.877 ************************************
00:33:45.877 END TEST spdkcli_nvmf_rdma
00:33:45.877 ************************************
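The run above drives every nvmf configuration change through spdkcli, one command per invocation, with the expected-output flag as the pass criterion. As a minimal sketch of replaying part of the same setup by hand, assuming a running nvmf_tgt and assuming the in-tree scripts/spdkcli.py accepts a single command on its argv (as the "ll /nvmf" call in the trace suggests); the command list is a shortened, illustrative subset:

    #!/usr/bin/env bash
    # Sketch: replay a few of the spdkcli commands from the run above.
    # SPDK_CLI is an assumed path, not part of the test harness itself.
    SPDK_CLI=${SPDK_CLI:-./scripts/spdkcli.py}
    commands=(
        '/bdevs/malloc create 32 512 Malloc1'
        'nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'
        '/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'
        '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'
    )
    for cmd in "${commands[@]}"; do
        # Word-splitting is intentional: the command is passed as argv words.
        $SPDK_CLI $cmd || { echo "FAILED: $cmd" >&2; exit 1; }
    done
    $SPDK_CLI ll /nvmf   # dump the tree, as check_match does before matching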
00:33:45.877 14:32:13 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']'
00:33:45.877 14:32:13 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']'
00:33:45.877 14:32:13 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']'
00:33:45.877 14:32:13 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']'
00:33:45.877 14:32:13 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']'
00:33:45.877 14:32:13 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']'
00:33:45.877 14:32:13 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']'
00:33:45.877 14:32:13 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']'
00:33:45.877 14:32:13 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']'
00:33:45.877 14:32:13 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']'
00:33:45.877 14:32:13 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']'
00:33:45.877 14:32:13 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]]
00:33:45.877 14:32:13 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]]
00:33:45.877 14:32:13 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]]
00:33:45.877 14:32:13 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]]
00:33:45.877 14:32:13 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT
00:33:45.877 14:32:13 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup
00:33:45.877 14:32:13 -- common/autotest_common.sh@720 -- # xtrace_disable
00:33:45.877 14:32:13 -- common/autotest_common.sh@10 -- # set +x
00:33:45.877 14:32:13 -- spdk/autotest.sh@383 -- # autotest_cleanup
00:33:45.877 14:32:13 -- common/autotest_common.sh@1388 -- # local autotest_es=0
00:33:45.877 14:32:13 -- common/autotest_common.sh@1389 -- # xtrace_disable
00:33:45.877 14:32:13 -- common/autotest_common.sh@10 -- # set +x
00:33:47.778 INFO: APP EXITING
00:33:47.778 INFO: killing all VMs
00:33:47.778 INFO: killing vhost app
00:33:47.778 INFO: EXIT DONE
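The "vfio-pci -> ioatdma" lines that follow show the cleanup handing PCI functions back from the userspace driver to kernel drivers. A hedged sketch of the sysfs mechanism behind that rebind (the BDF and driver name are examples taken from the log; SPDK's setup.sh reset is the actual implementation):

    # Rebind one PCI function from vfio-pci back to a kernel driver via sysfs.
    bdf=0000:00:04.7   # example BDF from the log below
    drv=ioatdma        # kernel driver to hand the device back to
    if [ -e "/sys/bus/pci/devices/$bdf/driver" ]; then
        echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"
    fi
    echo "" > "/sys/bus/pci/devices/$bdf/driver_override"   # clear any override
    echo "$bdf" > "/sys/bus/pci/drivers/$drv/bind"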
00:33:49.154 Waiting for block devices as requested
00:33:49.154 0000:84:00.0 (8086 0a54): vfio-pci -> nvme
00:33:49.154 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:33:49.154 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:33:49.412 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:33:49.412 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:33:49.412 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:33:49.412 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:33:49.412 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:33:49.672 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:33:49.672 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:33:49.672 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:33:49.672 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:33:49.932 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:33:49.932 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:33:49.932 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:33:50.190 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:33:50.190 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:33:51.565 Cleaning
00:33:51.565 Removing: /var/run/dpdk/spdk0/config
00:33:51.565 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:33:51.565 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:33:51.565 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:33:51.565 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:33:51.565 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:33:51.565 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:33:51.565 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:33:51.565 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:33:51.565 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:33:51.565 Removing: /var/run/dpdk/spdk0/hugepage_info
00:33:51.565 Removing: /var/run/dpdk/spdk1/config
00:33:51.565 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:33:51.565 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:33:51.565 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:33:51.565 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:33:51.565 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:33:51.565 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:33:51.565 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:33:51.565 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:33:51.565 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:33:51.565 Removing: /var/run/dpdk/spdk1/hugepage_info
00:33:51.565 Removing: /var/run/dpdk/spdk1/mp_socket
00:33:51.565 Removing: /var/run/dpdk/spdk2/config
00:33:51.565 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:33:51.565 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:33:51.565 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:33:51.565 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:33:51.565 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:33:51.565 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:33:51.823 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:33:51.823 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:33:51.823 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:33:51.823 Removing: /var/run/dpdk/spdk2/hugepage_info
00:33:51.823 Removing: /var/run/dpdk/spdk3/config
00:33:51.823 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:33:51.823 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:33:51.823 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:33:51.823 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:33:51.823 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:33:51.823 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:33:51.823 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:33:51.823 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:33:51.823 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:33:51.823 Removing: /var/run/dpdk/spdk3/hugepage_info
00:33:51.823 Removing: /var/run/dpdk/spdk4/config
00:33:51.823 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:33:51.823 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:33:51.823 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:33:51.823 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:33:51.823 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:33:51.823 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:33:51.823 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:33:51.824 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:33:51.824 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:33:51.824 Removing: /var/run/dpdk/spdk4/hugepage_info
00:33:51.824 Removing: /dev/shm/bdevperf_trace.pid188705
00:33:51.824 Removing: /dev/shm/bdevperf_trace.pid79122
00:33:51.824 Removing: /dev/shm/bdev_svc_trace.1
00:33:51.824 Removing: /dev/shm/nvmf_trace.0
00:33:51.824 Removing: /dev/shm/spdk_tgt_trace.pid4160394
00:33:51.824 Removing: /var/run/dpdk/spdk0
00:33:51.824 Removing: /var/run/dpdk/spdk1
00:33:51.824 Removing: /var/run/dpdk/spdk2
00:33:51.824 Removing: /var/run/dpdk/spdk3
00:33:51.824 Removing: /var/run/dpdk/spdk4
00:33:51.824 Removing: /var/run/dpdk/spdk_pid110481
00:33:51.824 Removing: /var/run/dpdk/spdk_pid112680
00:33:51.824 Removing: /var/run/dpdk/spdk_pid153882
00:33:51.824 Removing: /var/run/dpdk/spdk_pid157410
00:33:51.824 Removing: /var/run/dpdk/spdk_pid187211
00:33:51.824 Removing: /var/run/dpdk/spdk_pid187916
00:33:51.824 Removing: /var/run/dpdk/spdk_pid188705
00:33:51.824 Removing: /var/run/dpdk/spdk_pid191303
00:33:51.824 Removing: /var/run/dpdk/spdk_pid195603
00:33:51.824 Removing: /var/run/dpdk/spdk_pid196269
00:33:51.824 Removing: /var/run/dpdk/spdk_pid196914
00:33:51.824 Removing: /var/run/dpdk/spdk_pid197575
00:33:51.824 Removing: /var/run/dpdk/spdk_pid197953
00:33:51.824 Removing: /var/run/dpdk/spdk_pid200710
00:33:51.824 Removing: /var/run/dpdk/spdk_pid200712
00:33:51.824 Removing: /var/run/dpdk/spdk_pid203615
00:33:51.824 Removing: /var/run/dpdk/spdk_pid204005
00:33:51.824 Removing: /var/run/dpdk/spdk_pid204400
00:33:51.824 Removing: /var/run/dpdk/spdk_pid204923
00:33:51.824 Removing: /var/run/dpdk/spdk_pid204934
00:33:51.824 Removing: /var/run/dpdk/spdk_pid206376
00:33:51.824 Removing: /var/run/dpdk/spdk_pid208176
00:33:51.824 Removing: /var/run/dpdk/spdk_pid209486
00:33:51.824 Removing: /var/run/dpdk/spdk_pid210793
00:33:51.824 Removing: /var/run/dpdk/spdk_pid212100
00:33:51.824 Removing: /var/run/dpdk/spdk_pid213406
00:33:51.824 Removing: /var/run/dpdk/spdk_pid217356
00:33:51.824 Removing: /var/run/dpdk/spdk_pid217684
00:33:51.824 Removing: /var/run/dpdk/spdk_pid219083
00:33:51.824 Removing: /var/run/dpdk/spdk_pid219817
00:33:51.824 Removing: /var/run/dpdk/spdk_pid223541
00:33:51.824 Removing: /var/run/dpdk/spdk_pid225515
00:33:51.824 Removing: /var/run/dpdk/spdk_pid229120
00:33:51.824 Removing: /var/run/dpdk/spdk_pid236906
00:33:51.824 Removing: /var/run/dpdk/spdk_pid236958
00:33:51.824 Removing: /var/run/dpdk/spdk_pid250848
00:33:51.824 Removing: /var/run/dpdk/spdk_pid250995
00:33:51.824 Removing: /var/run/dpdk/spdk_pid254916
00:33:51.824 Removing: /var/run/dpdk/spdk_pid255112
00:33:51.824 Removing: /var/run/dpdk/spdk_pid256334
00:33:51.824 Removing: /var/run/dpdk/spdk_pid257773
00:33:51.824 Removing: /var/run/dpdk/spdk_pid4158849
00:33:51.824 Removing: /var/run/dpdk/spdk_pid4159574
00:33:51.824 Removing: /var/run/dpdk/spdk_pid4160394
00:33:51.824 Removing: /var/run/dpdk/spdk_pid4160827
00:33:51.824 Removing: /var/run/dpdk/spdk_pid4161519
00:33:51.824 Removing: /var/run/dpdk/spdk_pid4161659
00:33:51.824 Removing: /var/run/dpdk/spdk_pid4162368
00:33:51.824 Removing: /var/run/dpdk/spdk_pid4162386
00:33:51.824 Removing: /var/run/dpdk/spdk_pid4162628
00:33:51.824 Removing: /var/run/dpdk/spdk_pid4165764
00:33:51.824 Removing: /var/run/dpdk/spdk_pid4166745
00:33:51.824 Removing: /var/run/dpdk/spdk_pid4167001
00:33:51.824 Removing: /var/run/dpdk/spdk_pid4167240
00:33:51.824 Removing: /var/run/dpdk/spdk_pid4167440
00:33:51.824 Removing: /var/run/dpdk/spdk_pid4167628
00:33:51.824 Removing: /var/run/dpdk/spdk_pid4167785
00:33:51.824 Removing: /var/run/dpdk/spdk_pid4167943
00:33:51.824 Removing: /var/run/dpdk/spdk_pid4168130
00:33:51.824 Removing: /var/run/dpdk/spdk_pid4168703
00:33:51.824 Removing: /var/run/dpdk/spdk_pid4171050
00:33:51.824 Removing: /var/run/dpdk/spdk_pid4171220
00:33:51.824 Removing: /var/run/dpdk/spdk_pid4171382
00:33:51.824 Removing: /var/run/dpdk/spdk_pid4171385
00:33:51.824 Removing: /var/run/dpdk/spdk_pid4171816
00:33:51.824 Removing: /var/run/dpdk/spdk_pid4171825
00:33:51.824 Removing: /var/run/dpdk/spdk_pid4172248
00:33:51.824 Removing: /var/run/dpdk/spdk_pid4172261
00:33:51.824 Removing: /var/run/dpdk/spdk_pid4172509
00:33:51.824 Removing: /var/run/dpdk/spdk_pid4172561
00:33:52.082 Removing: /var/run/dpdk/spdk_pid4172723
00:33:52.082 Removing: /var/run/dpdk/spdk_pid4172742
00:33:52.082 Removing: /var/run/dpdk/spdk_pid4173226
00:33:52.082 Removing: /var/run/dpdk/spdk_pid4173386
00:33:52.082 Removing: /var/run/dpdk/spdk_pid4173587
00:33:52.082 Removing: /var/run/dpdk/spdk_pid4173753
00:33:52.082 Removing: /var/run/dpdk/spdk_pid4173778
00:33:52.082 Removing: /var/run/dpdk/spdk_pid4173962
00:33:52.082 Removing: /var/run/dpdk/spdk_pid4174122
00:33:52.082 Removing: /var/run/dpdk/spdk_pid4174281
00:33:52.082 Removing: /var/run/dpdk/spdk_pid4174547
00:33:52.082 Removing: /var/run/dpdk/spdk_pid4174708
00:33:52.082 Removing: /var/run/dpdk/spdk_pid4174867
00:33:52.082 Removing: /var/run/dpdk/spdk_pid4175085
00:33:52.082 Removing: /var/run/dpdk/spdk_pid4175292
00:33:52.082 Removing: /var/run/dpdk/spdk_pid4175455
00:33:52.082 Removing: /var/run/dpdk/spdk_pid4175612
00:33:52.082 Removing: /var/run/dpdk/spdk_pid4175880
00:33:52.082 Removing: /var/run/dpdk/spdk_pid4176036
00:33:52.082 Removing: /var/run/dpdk/spdk_pid4176203
00:33:52.082 Removing: /var/run/dpdk/spdk_pid4176361
00:33:52.082 Removing: /var/run/dpdk/spdk_pid4176628
00:33:52.082 Removing: /var/run/dpdk/spdk_pid4176783
00:33:52.082 Removing: /var/run/dpdk/spdk_pid4176946
00:33:52.082 Removing: /var/run/dpdk/spdk_pid4177200
00:33:52.082 Removing: /var/run/dpdk/spdk_pid4177377
00:33:52.082 Removing: /var/run/dpdk/spdk_pid4177535
00:33:52.082 Removing: /var/run/dpdk/spdk_pid4177692
00:33:52.082 Removing: /var/run/dpdk/spdk_pid4177878
00:33:52.083 Removing: /var/run/dpdk/spdk_pid4178082
00:33:52.083 Removing: /var/run/dpdk/spdk_pid4180564
00:33:52.083 Removing: /var/run/dpdk/spdk_pid53833
00:33:52.083 Removing: /var/run/dpdk/spdk_pid56458
00:33:52.083 Removing: /var/run/dpdk/spdk_pid63601
00:33:52.083 Removing: /var/run/dpdk/spdk_pid66954
00:33:52.083 Removing: /var/run/dpdk/spdk_pid69051
00:33:52.083 Removing: /var/run/dpdk/spdk_pid69708
00:33:52.083 Removing: /var/run/dpdk/spdk_pid79122
00:33:52.083 Removing: /var/run/dpdk/spdk_pid79273
00:33:52.083 Removing: /var/run/dpdk/spdk_pid81906
00:33:52.083 Removing: /var/run/dpdk/spdk_pid85881
00:33:52.083 Removing: /var/run/dpdk/spdk_pid88540
00:33:52.083 Removing: /var/run/dpdk/spdk_pid95042
00:33:52.083 Clean
00:33:52.083 14:32:19 -- common/autotest_common.sh@1447 -- # return 0
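The coverage steps that follow use the standard lcov workflow: capture post-test counters, merge them with the baseline taken before the tests, then strip source trees that should not count toward SPDK coverage. A minimal sketch with illustrative paths (the run itself uses the Jenkins workspace paths and a longer set of --rc switches):

    lcov -q -c -d ./spdk -t "$(hostname)" -o cov_test.info      # capture post-test counters
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info # merge with the baseline
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info      # drop bundled DPDK sources
    lcov -q -r cov_total.info '/usr/*' -o cov_total.info        # drop system headers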
00:33:52.083 14:32:19 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:33:52.083 14:32:19 -- common/autotest_common.sh@726 -- # xtrace_disable
00:33:52.083 14:32:19 -- common/autotest_common.sh@10 -- # set +x
00:33:52.083 14:32:19 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:33:52.083 14:32:19 -- common/autotest_common.sh@726 -- # xtrace_disable
00:33:52.083 14:32:19 -- common/autotest_common.sh@10 -- # set +x
00:33:52.083 14:32:19 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt
00:33:52.083 14:32:19 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]]
00:33:52.083 14:32:19 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log
00:33:52.083 14:32:19 -- spdk/autotest.sh@391 -- # hash lcov
00:33:52.083 14:32:19 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:33:52.083 14:32:19 -- spdk/autotest.sh@393 -- # hostname
00:33:52.083 14:32:19 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-gp-14 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info
00:33:52.341 geninfo: WARNING: invalid characters removed from testname!
00:34:18.901 14:32:46 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:34:23.083 14:32:50 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:34:25.612 14:32:52 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:34:28.891 14:32:55 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:34:31.470 14:32:58 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:34:33.998 14:33:01 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:34:37.284 14:33:03 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:34:37.284 14:33:03 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:34:37.284 14:33:03 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:34:37.284 14:33:03 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:37.284 14:33:03 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:37.284 14:33:03 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:37.284 14:33:03 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:37.284 14:33:03 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:37.284 14:33:03 -- paths/export.sh@5 -- $ export PATH
00:34:37.284 14:33:03 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:37.284 14:33:03 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
00:34:37.284 14:33:03 -- common/autobuild_common.sh@440 -- $ date +%s
00:34:37.284 14:33:04 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1721824384.XXXXXX
00:34:37.284 14:33:04 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1721824384.zQ2B0n
00:34:37.284 14:33:04 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:34:37.284 14:33:04 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']'
00:34:37.284 14:33:04 -- common/autobuild_common.sh@447 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
00:34:37.284 14:33:04 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk'
00:34:37.284 14:33:04 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp'
00:34:37.284 14:33:04 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:34:37.284 14:33:04 -- common/autobuild_common.sh@456 -- $ get_config_params
00:34:37.284 14:33:04 -- common/autotest_common.sh@395 -- $ xtrace_disable
00:34:37.284 14:33:04 -- common/autotest_common.sh@10 -- $ set +x
00:34:37.284 14:33:04 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build'
00:34:37.284 14:33:04 -- common/autobuild_common.sh@458 -- $ start_monitor_resources
00:34:37.284 14:33:04 -- pm/common@17 -- $ local monitor
00:34:37.284 14:33:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:37.284 14:33:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:37.284 14:33:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:37.284 14:33:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:37.284 14:33:04 -- pm/common@21 -- $ date +%s
00:34:37.284 14:33:04 -- pm/common@21 -- $ date +%s
00:34:37.284 14:33:04 -- pm/common@25 -- $ sleep 1
00:34:37.284 14:33:04 -- pm/common@21 -- $ date +%s
00:34:37.284 14:33:04 -- pm/common@21 -- $ date +%s
00:34:37.284 14:33:04 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721824384
00:34:37.284 14:33:04 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721824384
00:34:37.284 14:33:04 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721824384
00:34:37.284 14:33:04 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721824384
00:34:37.284 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721824384_collect-vmstat.pm.log
00:34:37.284 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721824384_collect-cpu-temp.pm.log
00:34:37.284 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721824384_collect-cpu-load.pm.log
00:34:37.284 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721824384_collect-bmc-pm.bmc.pm.log
00:34:37.855 14:33:05 -- common/autobuild_common.sh@459 -- $ trap stop_monitor_resources EXIT
00:34:37.855 14:33:05 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:34:37.855 14:33:05 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:34:37.855 14:33:05 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:34:37.855 14:33:05 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:34:37.855 14:33:05 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:34:37.855 14:33:05 -- spdk/autopackage.sh@19 -- $ timing_finish
00:34:37.855 14:33:05 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:34:37.855 14:33:05 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:34:37.855 14:33:05 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt
00:34:37.855 14:33:05 -- spdk/autopackage.sh@20 -- $ exit 0
00:34:37.855 14:33:05 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:34:37.855 14:33:05 -- pm/common@29 -- $ signal_monitor_resources TERM
00:34:37.855 14:33:05 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:34:37.855 14:33:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:37.855 14:33:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:34:37.855 14:33:05 -- pm/common@44 -- $ pid=272056
00:34:37.855 14:33:05 -- pm/common@50 -- $ kill -TERM 272056
00:34:37.855 14:33:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:37.855 14:33:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:34:37.855 14:33:05 -- pm/common@44 -- $ pid=272058
00:34:37.855 14:33:05 -- pm/common@50 -- $ kill -TERM 272058
00:34:37.855 14:33:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:37.855 14:33:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:34:37.855 14:33:05 -- pm/common@44 -- $ pid=272060
00:34:37.855 14:33:05 -- pm/common@50 -- $ kill -TERM 272060
00:34:37.855 14:33:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:37.855 14:33:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:34:37.855 14:33:05 -- pm/common@44 -- $ pid=272090
00:34:37.855 14:33:05 -- pm/common@50 -- $ sudo -E kill -TERM 272090
00:34:37.855 + [[ -n 4052120 ]]
00:34:37.855 + sudo kill 4052120
00:34:37.866 [Pipeline] }
00:34:37.886 [Pipeline] // stage
00:34:37.892 [Pipeline] }
00:34:37.910 [Pipeline] // timeout
00:34:37.916 [Pipeline] }
00:34:37.934 [Pipeline] // catchError
00:34:37.941 [Pipeline] }
00:34:37.966 [Pipeline] // wrap
00:34:37.973 [Pipeline] }
00:34:37.990 [Pipeline] // catchError
00:34:38.001 [Pipeline] stage
00:34:38.004 [Pipeline] { (Epilogue)
00:34:38.022 [Pipeline] catchError
00:34:38.024 [Pipeline] {
00:34:38.041 [Pipeline] echo
00:34:38.043 Cleanup processes
00:34:38.051 [Pipeline] sh
00:34:38.334 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:34:38.335 272228 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/sdr.cache
00:34:38.335 272321 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:34:38.350 [Pipeline] sh
00:34:38.632 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:34:38.632 ++ grep -v 'sudo pgrep'
00:34:38.632 ++ awk '{print $1}'
00:34:38.632 + sudo kill -9 272228
00:34:38.644 [Pipeline] sh
00:34:38.924 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:34:48.906 [Pipeline] sh
00:34:49.189 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:34:49.189 Artifacts sizes are good
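The pm/common@42..50 loop above stops the resource monitors through their pidfiles: each collect-* script records its PID under the power output directory at startup, and shutdown walks those pidfiles sending SIGTERM. A minimal sketch of that convention (the directory path is illustrative):

    power_dir=./output/power
    for pidfile in "$power_dir"/collect-*.pid; do
        [ -e "$pidfile" ] || continue
        kill -TERM "$(cat "$pidfile")" 2>/dev/null || true   # let the monitor flush and exit
    done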
00:34:49.204 [Pipeline] archiveArtifacts
00:34:49.211 Archiving artifacts
00:34:49.409 [Pipeline] sh
00:34:49.691 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-phy-autotest
00:34:49.706 [Pipeline] cleanWs
00:34:49.717 [WS-CLEANUP] Deleting project workspace...
00:34:49.717 [WS-CLEANUP] Deferred wipeout is used...
00:34:49.723 [WS-CLEANUP] done
00:34:49.725 [Pipeline] }
00:34:49.747 [Pipeline] // catchError
00:34:49.762 [Pipeline] sh
00:34:50.042 + logger -p user.info -t JENKINS-CI
00:34:50.052 [Pipeline] }
00:34:50.069 [Pipeline] // stage
00:34:50.075 [Pipeline] }
00:34:50.094 [Pipeline] // node
00:34:50.101 [Pipeline] End of Pipeline
00:34:50.152 Finished: SUCCESS
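For reference, the timing_finish step traced earlier renders the per-phase timing.txt into an SVG with the stock FlameGraph tool; the flags mirror the logged call, and only the output redirection below is an assumption:

    /usr/local/FlameGraph/flamegraph.pl \
        --title 'Build Timing' --nametype Step: --countname seconds \
        ./output/timing.txt > ./output/timing.svg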